I0824 04:01:52.442869 7 e2e.go:243] Starting e2e run "13977ac3-fb95-481e-b5b9-e3a3c05a0f4f" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598241699 - Will randomize all specs
Will run 215 of 4413 specs

Aug 24 04:01:53.851: INFO: >>> kubeConfig: /root/.kube/config
Aug 24 04:01:53.909: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 24 04:01:54.095: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 24 04:01:54.251: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 24 04:01:54.251: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 24 04:01:54.251: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 24 04:01:54.295: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 24 04:01:54.295: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 24 04:01:54.295: INFO: e2e test version: v1.15.12
Aug 24 04:01:54.299: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:01:54.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
Aug 24 04:01:54.619: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Aug 24 04:01:58.680: INFO: Pod pod-hostip-4b933763-9faa-4188-974d-239e893772d1 has hostIP: 172.18.0.9
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:01:58.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2130" for this suite.
Aug 24 04:02:20.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:02:21.102: INFO: namespace pods-2130 deletion completed in 22.401316792s
• [SLOW TEST:26.800 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:02:21.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:02:21.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4830" for this suite.
Aug 24 04:02:43.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:02:43.471: INFO: namespace pods-4830 deletion completed in 22.187270735s
• [SLOW TEST:22.365 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:02:43.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0984df1f-9163-465e-b388-848112651d7e
STEP: Creating a pod to test consume configMaps
Aug 24 04:02:43.618: INFO: Waiting up to 5m0s for pod "pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332" in namespace "configmap-5492" to be "success or failure"
Aug 24 04:02:43.657: INFO: Pod "pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332": Phase="Pending", Reason="", readiness=false. Elapsed: 38.126862ms
Aug 24 04:02:46.002: INFO: Pod "pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332": Phase="Pending", Reason="", readiness=false. Elapsed: 2.383530419s
Aug 24 04:02:48.008: INFO: Pod "pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332": Phase="Pending", Reason="", readiness=false. Elapsed: 4.390027212s
Aug 24 04:02:50.181: INFO: Pod "pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332": Phase="Pending", Reason="", readiness=false. Elapsed: 6.563028976s
Aug 24 04:02:52.189: INFO: Pod "pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332": Phase="Running", Reason="", readiness=true. Elapsed: 8.570469192s
Aug 24 04:02:54.197: INFO: Pod "pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.578978877s
STEP: Saw pod success
Aug 24 04:02:54.198: INFO: Pod "pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332" satisfied condition "success or failure"
Aug 24 04:02:54.295: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332 container configmap-volume-test:
STEP: delete the pod
Aug 24 04:02:54.363: INFO: Waiting for pod pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332 to disappear
Aug 24 04:02:54.373: INFO: Pod pod-configmaps-9c0aa51b-9a83-4157-ba9f-0fa35c35a332 no longer exists
[AfterEach] [sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:02:54.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5492" for this suite.
Aug 24 04:03:00.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:03:00.543: INFO: namespace configmap-5492 deletion completed in 6.160217814s
• [SLOW TEST:17.070 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:03:00.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 24 04:03:00.696: INFO: Waiting up to 5m0s for pod "pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e" in namespace "emptydir-1773" to be "success or failure"
Aug 24 04:03:00.754: INFO: Pod "pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e": Phase="Pending", Reason="", readiness=false. Elapsed: 57.475327ms
Aug 24 04:03:03.034: INFO: Pod "pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.337497208s
Aug 24 04:03:05.040: INFO: Pod "pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.343770432s
Aug 24 04:03:07.104: INFO: Pod "pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e": Phase="Running", Reason="", readiness=true. Elapsed: 6.408278152s
Aug 24 04:03:09.109: INFO: Pod "pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.412634547s
STEP: Saw pod success
Aug 24 04:03:09.109: INFO: Pod "pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e" satisfied condition "success or failure"
Aug 24 04:03:09.113: INFO: Trying to get logs from node iruya-worker pod pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e container test-container:
STEP: delete the pod
Aug 24 04:03:09.149: INFO: Waiting for pod pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e to disappear
Aug 24 04:03:09.153: INFO: Pod pod-bf23e4d1-b71f-4ee5-9789-85d2f113645e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:03:09.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1773" for this suite.
Aug 24 04:03:15.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:03:16.333: INFO: namespace emptydir-1773 deletion completed in 7.17209474s
• [SLOW TEST:15.784 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:03:16.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-f81aaf44-aa3c-4650-8780-5bf6dcd5f195
STEP: Creating secret with name secret-projected-all-test-volume-7789214f-b20c-4dfd-b24f-05129d177318
STEP: Creating a pod to test Check all projections for projected volume plugin
Aug 24 04:03:16.739: INFO: Waiting up to 5m0s for pod "projected-volume-1d05fde0-897e-47ac-8904-69015c807df5" in namespace "projected-7306" to be "success or failure"
Aug 24 04:03:16.816: INFO: Pod "projected-volume-1d05fde0-897e-47ac-8904-69015c807df5": Phase="Pending", Reason="", readiness=false. Elapsed: 76.300906ms
Aug 24 04:03:19.140: INFO: Pod "projected-volume-1d05fde0-897e-47ac-8904-69015c807df5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.400175496s
Aug 24 04:03:21.147: INFO: Pod "projected-volume-1d05fde0-897e-47ac-8904-69015c807df5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.407756284s
Aug 24 04:03:23.155: INFO: Pod "projected-volume-1d05fde0-897e-47ac-8904-69015c807df5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.415850191s
STEP: Saw pod success
Aug 24 04:03:23.156: INFO: Pod "projected-volume-1d05fde0-897e-47ac-8904-69015c807df5" satisfied condition "success or failure"
Aug 24 04:03:23.161: INFO: Trying to get logs from node iruya-worker pod projected-volume-1d05fde0-897e-47ac-8904-69015c807df5 container projected-all-volume-test:
STEP: delete the pod
Aug 24 04:03:23.184: INFO: Waiting for pod projected-volume-1d05fde0-897e-47ac-8904-69015c807df5 to disappear
Aug 24 04:03:23.204: INFO: Pod projected-volume-1d05fde0-897e-47ac-8904-69015c807df5 no longer exists
[AfterEach] [sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:03:23.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7306" for this suite.
Aug 24 04:03:29.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:03:29.370: INFO: namespace projected-7306 deletion completed in 6.15822707s
• [SLOW TEST:13.036 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:03:29.373: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:03:29.447: INFO: Creating deployment "test-recreate-deployment"
Aug 24 04:03:29.464: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Aug 24 04:03:29.570: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Aug 24 04:03:31.659: INFO: Waiting deployment "test-recreate-deployment" to complete
Aug 24 04:03:31.702: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733838609, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733838609, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733838609, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733838609, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 04:03:33.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733838609, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733838609, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733838609, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733838609, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 04:03:35.708: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Aug 24 04:03:35.723: INFO: Updating deployment test-recreate-deployment
Aug 24 04:03:35.723: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 24 04:03:36.066: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8181,SelfLink:/apis/apps/v1/namespaces/deployment-8181/deployments/test-recreate-deployment,UID:228cb805-793e-432e-8148-4648c79644a8,ResourceVersion:2279214,Generation:2,CreationTimestamp:2020-08-24 04:03:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-24 04:03:35 +0000 UTC 2020-08-24 04:03:35 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-24 04:03:35 +0000 UTC 2020-08-24 04:03:29 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Aug 24 04:03:36.400: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8181,SelfLink:/apis/apps/v1/namespaces/deployment-8181/replicasets/test-recreate-deployment-5c8c9cc69d,UID:9f3c6e25-0888-4d20-a949-f43521fd93f7,ResourceVersion:2279211,Generation:1,CreationTimestamp:2020-08-24 04:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 228cb805-793e-432e-8148-4648c79644a8 0x95c35b7 0x95c35b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 24 04:03:36.400: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Aug 24 04:03:36.401: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8181,SelfLink:/apis/apps/v1/namespaces/deployment-8181/replicasets/test-recreate-deployment-6df85df6b9,UID:8ea2fc9c-58cc-4321-a6f1-89189362e06e,ResourceVersion:2279202,Generation:2,CreationTimestamp:2020-08-24 04:03:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 228cb805-793e-432e-8148-4648c79644a8 0x95c3687 0x95c3688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 24 04:03:36.448: INFO: Pod "test-recreate-deployment-5c8c9cc69d-t5sp5" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-t5sp5,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8181,SelfLink:/api/v1/namespaces/deployment-8181/pods/test-recreate-deployment-5c8c9cc69d-t5sp5,UID:f96c2e31-1c7d-4f92-ba95-2aa117ad4e4b,ResourceVersion:2279209,Generation:0,CreationTimestamp:2020-08-24 04:03:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 9f3c6e25-0888-4d20-a949-f43521fd93f7 0x8ea3077 0x8ea3078}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-z54nn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-z54nn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-z54nn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0x8ea30f0} {node.kubernetes.io/unreachable Exists NoExecute 0x8ea3110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:03:35 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:03:36.448: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8181" for this suite.
Aug 24 04:03:42.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:03:42.782: INFO: namespace deployment-8181 deletion completed in 6.230734098s

• [SLOW TEST:13.409 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:03:42.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-b2d08ce6-9d70-4545-bd38-99eceea2d81f
STEP: Creating a pod to test consume secrets
Aug 24 04:03:42.916: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a" in namespace "projected-5796" to be "success or failure"
Aug 24 04:03:42.931: INFO: Pod "pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a": Phase="Pending", Reason="", readiness=false. Elapsed: 14.629207ms
Aug 24 04:03:44.995: INFO: Pod "pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079151256s
Aug 24 04:03:47.152: INFO: Pod "pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.236036834s
Aug 24 04:03:49.194: INFO: Pod "pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.277675646s
STEP: Saw pod success
Aug 24 04:03:49.194: INFO: Pod "pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a" satisfied condition "success or failure"
Aug 24 04:03:49.432: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a container projected-secret-volume-test: 
STEP: delete the pod
Aug 24 04:03:49.460: INFO: Waiting for pod pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a to disappear
Aug 24 04:03:49.786: INFO: Pod pod-projected-secrets-cd8f10bd-6246-43c3-aea1-6a0424bb3a7a no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:03:49.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5796" for this suite.
Aug 24 04:03:56.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:03:57.766: INFO: namespace projected-5796 deletion completed in 7.969830064s

• [SLOW TEST:14.980 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources 
  Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:03:57.767: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:03:59.689: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:04:01.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-830" for this suite.
Aug 24 04:04:07.517: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:04:07.661: INFO: namespace custom-resource-definition-830 deletion completed in 6.250629337s

• [SLOW TEST:9.894 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:04:07.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:04:07.795: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 24 04:04:14.256: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:04:23.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-3675" for this suite.
Aug 24 04:04:33.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:04:33.360: INFO: namespace init-container-3675 deletion completed in 10.144595689s

• [SLOW TEST:19.228 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
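The init-container spec above relies on the PodStatus ordering guarantee: init containers run sequentially to completion before any regular container starts. As a rough illustration of the check a test like this performs, the helper below inspects a PodStatus-shaped dict (field names follow the Kubernetes API; the sample data is invented for the sketch, not taken from this run):

```python
def init_containers_succeeded(pod_status):
    """Return True if every init container terminated with exit code 0.

    pod_status mimics the Kubernetes PodStatus object: each entry in
    initContainerStatuses carries a state with exactly one of
    waiting/running/terminated set.
    """
    for status in pod_status.get("initContainerStatuses", []):
        terminated = status.get("state", {}).get("terminated")
        # A missing "terminated" state means the init container is still
        # waiting or running, so the pod cannot have finished initializing.
        if terminated is None or terminated.get("exitCode") != 0:
            return False
    return True

# Hypothetical status for a RestartNever pod whose init containers completed.
sample = {
    "phase": "Succeeded",
    "initContainerStatuses": [
        {"name": "init1", "state": {"terminated": {"exitCode": 0}}},
        {"name": "init2", "state": {"terminated": {"exitCode": 0}}},
    ],
}
print(init_containers_succeeded(sample))  # True
```

This is a simulation of the invariant only; the real e2e test watches pod events through the API server rather than inspecting a static status snapshot.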
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:04:33.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:04:35.399: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 24 04:04:35.746: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:35.943: INFO: Number of nodes with available pods: 0
Aug 24 04:04:35.943: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 04:04:37.466: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:37.918: INFO: Number of nodes with available pods: 0
Aug 24 04:04:37.918: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 04:04:37.959: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:38.262: INFO: Number of nodes with available pods: 0
Aug 24 04:04:38.262: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 04:04:38.956: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:38.963: INFO: Number of nodes with available pods: 0
Aug 24 04:04:38.963: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 04:04:40.016: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:40.429: INFO: Number of nodes with available pods: 0
Aug 24 04:04:40.429: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 04:04:40.952: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:40.958: INFO: Number of nodes with available pods: 0
Aug 24 04:04:40.958: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 04:04:42.003: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:42.062: INFO: Number of nodes with available pods: 0
Aug 24 04:04:42.062: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 04:04:43.031: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:43.038: INFO: Number of nodes with available pods: 0
Aug 24 04:04:43.038: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 04:04:43.961: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:43.968: INFO: Number of nodes with available pods: 2
Aug 24 04:04:43.969: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 24 04:04:44.333: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:44.333: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:44.603: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:45.759: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:45.759: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:45.797: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:46.674: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:46.675: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:46.884: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:47.621: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:47.621: INFO: Pod daemon-set-r8m7p is not available
Aug 24 04:04:47.621: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:47.630: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:48.611: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:48.611: INFO: Pod daemon-set-r8m7p is not available
Aug 24 04:04:48.612: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:48.622: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:49.612: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:49.612: INFO: Pod daemon-set-r8m7p is not available
Aug 24 04:04:49.612: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:49.624: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:50.656: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:50.656: INFO: Pod daemon-set-r8m7p is not available
Aug 24 04:04:50.656: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:50.939: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:51.611: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:51.612: INFO: Pod daemon-set-r8m7p is not available
Aug 24 04:04:51.612: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:51.623: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:52.612: INFO: Wrong image for pod: daemon-set-r8m7p. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:52.612: INFO: Pod daemon-set-r8m7p is not available
Aug 24 04:04:52.612: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:52.622: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:53.668: INFO: Pod daemon-set-98tzh is not available
Aug 24 04:04:53.668: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:53.704: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:54.652: INFO: Pod daemon-set-98tzh is not available
Aug 24 04:04:54.652: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:54.676: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:55.610: INFO: Pod daemon-set-98tzh is not available
Aug 24 04:04:55.610: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:55.618: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:56.962: INFO: Pod daemon-set-98tzh is not available
Aug 24 04:04:56.962: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:57.178: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:57.612: INFO: Pod daemon-set-98tzh is not available
Aug 24 04:04:57.612: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:57.623: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:58.651: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:58.856: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:04:59.610: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:04:59.619: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:00.613: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:05:00.613: INFO: Pod daemon-set-tqbkg is not available
Aug 24 04:05:00.623: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:01.610: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:05:01.610: INFO: Pod daemon-set-tqbkg is not available
Aug 24 04:05:01.671: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:02.613: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:05:02.614: INFO: Pod daemon-set-tqbkg is not available
Aug 24 04:05:02.636: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:03.612: INFO: Wrong image for pod: daemon-set-tqbkg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 24 04:05:03.613: INFO: Pod daemon-set-tqbkg is not available
Aug 24 04:05:03.622: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:04.610: INFO: Pod daemon-set-mncqk is not available
Aug 24 04:05:04.619: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 24 04:05:04.629: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:04.634: INFO: Number of nodes with available pods: 1
Aug 24 04:05:04.634: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 04:05:05.643: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:05.647: INFO: Number of nodes with available pods: 1
Aug 24 04:05:05.648: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 04:05:06.647: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:06.654: INFO: Number of nodes with available pods: 1
Aug 24 04:05:06.654: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 04:05:07.650: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 04:05:07.656: INFO: Number of nodes with available pods: 2
Aug 24 04:05:07.656: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-306, will wait for the garbage collector to delete the pods
Aug 24 04:05:07.750: INFO: Deleting DaemonSet.extensions daemon-set took: 11.944091ms
Aug 24 04:05:08.152: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.952491ms
Aug 24 04:05:13.459: INFO: Number of nodes with available pods: 0
Aug 24 04:05:13.459: INFO: Number of running nodes: 0, number of available pods: 0
Aug 24 04:05:13.466: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-306/daemonsets","resourceVersion":"2279718"},"items":null}

Aug 24 04:05:13.474: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-306/pods","resourceVersion":"2279718"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:05:13.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-306" for this suite.
Aug 24 04:05:21.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:05:21.663: INFO: namespace daemonsets-306 deletion completed in 8.151623568s

• [SLOW TEST:48.302 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
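The long "Wrong image for pod" / "is not available" sequence above is the framework repeatedly listing the daemon pods and comparing each one against the updated template until the RollingUpdate converges. A minimal sketch of that convergence check (plain dicts stand in for the real pod objects; names and fields here are illustrative, not the framework's API):

```python
def rollout_complete(pods, want_image):
    """Mimic the e2e check: every daemon pod runs the updated image and is available."""
    for pod in pods:
        if pod["image"] != want_image:
            # Matches the log's "Wrong image for pod" diagnostic.
            print(f"Wrong image for pod: {pod['name']}. "
                  f"Expected: {want_image}, got: {pod['image']}.")
            return False
        if not pod["available"]:
            # A replaced pod exists but has not passed readiness yet.
            print(f"Pod {pod['name']} is not available")
            return False
    return True

pods = [
    {"name": "daemon-set-98tzh", "image": "gcr.io/kubernetes-e2e-test-images/redis:1.0", "available": True},
    {"name": "daemon-set-mncqk", "image": "gcr.io/kubernetes-e2e-test-images/redis:1.0", "available": True},
]
print(rollout_complete(pods, "gcr.io/kubernetes-e2e-test-images/redis:1.0"))  # True
```

The real test wraps a check like this in a poll loop (roughly one-second intervals in the log) and, as the output shows, also skips tainted nodes such as the control plane that the DaemonSet does not tolerate.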
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:05:21.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:05:21.744: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8" in namespace "downward-api-3646" to be "success or failure"
Aug 24 04:05:21.750: INFO: Pod "downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.700296ms
Aug 24 04:05:23.757: INFO: Pod "downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012244512s
Aug 24 04:05:25.830: INFO: Pod "downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.085769545s
Aug 24 04:05:27.837: INFO: Pod "downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.092078281s
STEP: Saw pod success
Aug 24 04:05:27.837: INFO: Pod "downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8" satisfied condition "success or failure"
Aug 24 04:05:27.841: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8 container client-container: 
STEP: delete the pod
Aug 24 04:05:27.877: INFO: Waiting for pod downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8 to disappear
Aug 24 04:05:27.973: INFO: Pod downwardapi-volume-9d8dff1e-346c-414d-a0a5-72c3b40b60f8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:05:27.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3646" for this suite.
Aug 24 04:05:36.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:05:36.153: INFO: namespace downward-api-3646 deletion completed in 8.168970871s

• [SLOW TEST:14.488 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
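The "Waiting up to 5m0s for pod … to be 'success or failure'" lines in tests like this one are a timed poll: the framework re-reads the pod's phase every couple of seconds and stops at a terminal phase or at the deadline. A simplified stand-alone version of that loop (the function name and the simulated phase source are invented for this sketch):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until a terminal pod phase or the timeout expires.

    get_phase is a stand-in for an API read of the pod's status.phase;
    the defaults mirror the framework's 5m budget and ~2s poll cadence.
    """
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        # Mirrors the log's 'Phase="Pending" ... Elapsed: ...' progress lines.
        print(f'Pod phase="{phase}". Elapsed: {elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed > timeout:
            raise TimeoutError(f"pod did not reach a terminal phase in {timeout}s")
        time.sleep(interval)

# Simulated pod that stays Pending for two polls, then succeeds.
phases = iter(["Pending", "Pending", "Succeeded"])
print(wait_for_pod_condition(lambda: next(phases), timeout=10.0, interval=0.01))
```

Polling a callable rather than the API keeps the sketch self-contained; the real framework additionally treats an unexpected `Failed` phase as a test error rather than a clean result when it was waiting for success only.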
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:05:36.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Aug 24 04:05:36.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7830 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Aug 24 04:05:44.764: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0824 04:05:44.603050      41 log.go:172] (0x24a4f50) (0x24a4fc0) Create stream\nI0824 04:05:44.606248      41 log.go:172] (0x24a4f50) (0x24a4fc0) Stream added, broadcasting: 1\nI0824 04:05:44.619601      41 log.go:172] (0x24a4f50) Reply frame received for 1\nI0824 04:05:44.620527      41 log.go:172] (0x24a4f50) (0x24a5180) Create stream\nI0824 04:05:44.620636      41 log.go:172] (0x24a4f50) (0x24a5180) Stream added, broadcasting: 3\nI0824 04:05:44.622566      41 log.go:172] (0x24a4f50) Reply frame received for 3\nI0824 04:05:44.623171      41 log.go:172] (0x24a4f50) (0x266a1c0) Create stream\nI0824 04:05:44.623312      41 log.go:172] (0x24a4f50) (0x266a1c0) Stream added, broadcasting: 5\nI0824 04:05:44.625436      41 log.go:172] (0x24a4f50) Reply frame received for 5\nI0824 04:05:44.625983      41 log.go:172] (0x24a4f50) (0x29b2a10) Create stream\nI0824 04:05:44.626163      41 log.go:172] (0x24a4f50) (0x29b2a10) Stream added, broadcasting: 7\nI0824 04:05:44.628120      41 log.go:172] (0x24a4f50) Reply frame received for 7\nI0824 04:05:44.633942      41 log.go:172] (0x24a5180) (3) Writing data frame\nI0824 04:05:44.634968      41 log.go:172] (0x24a5180) (3) Writing data frame\nI0824 04:05:44.636631      41 log.go:172] (0x24a4f50) Data frame received for 5\nI0824 04:05:44.636953      41 log.go:172] (0x266a1c0) (5) Data frame handling\nI0824 04:05:44.637517      41 log.go:172] (0x266a1c0) (5) Data frame sent\nI0824 04:05:44.638137      41 log.go:172] (0x24a4f50) Data frame received for 5\nI0824 04:05:44.638265      41 log.go:172] (0x266a1c0) (5) Data frame handling\nI0824 04:05:44.638394      41 log.go:172] (0x266a1c0) (5) Data frame sent\nI0824 04:05:44.694054      41 log.go:172] (0x24a4f50) Data frame received for 5\nI0824 04:05:44.694313      41 log.go:172] (0x266a1c0) (5) Data frame handling\nI0824 04:05:44.694638      41 log.go:172] (0x24a4f50) Data frame received for 7\nI0824 04:05:44.694836      41 log.go:172] (0x29b2a10) (7) Data frame handling\nI0824 04:05:44.695001      41 log.go:172] (0x24a4f50) Data frame received for 1\nI0824 04:05:44.695988      41 log.go:172] (0x24a4f50) (0x24a5180) Stream removed, broadcasting: 3\nI0824 04:05:44.696374      41 log.go:172] (0x24a4fc0) (1) Data frame handling\nI0824 04:05:44.696522      41 log.go:172] (0x24a4fc0) (1) Data frame sent\nI0824 04:05:44.697052      41 log.go:172] (0x24a4f50) (0x24a4fc0) Stream removed, broadcasting: 1\nI0824 04:05:44.697784      41 log.go:172] (0x24a4f50) Go away received\nI0824 04:05:44.700055      41 log.go:172] (0x24a4f50) (0x24a4fc0) Stream removed, broadcasting: 1\nI0824 04:05:44.700273      41 log.go:172] (0x24a4f50) (0x24a5180) Stream removed, broadcasting: 3\nI0824 04:05:44.700366      41 log.go:172] (0x24a4f50) (0x266a1c0) Stream removed, broadcasting: 5\nI0824 04:05:44.700494      41 log.go:172] (0x24a4f50) (0x29b2a10) Stream removed, broadcasting: 7\n"
Aug 24 04:05:44.765: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:05:46.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7830" for this suite.
Aug 24 04:05:52.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:05:52.996: INFO: namespace kubectl-7830 deletion completed in 6.211904384s

• [SLOW TEST:16.842 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:05:52.998: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:05:59.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2968" for this suite.
Aug 24 04:06:05.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:06:05.424: INFO: namespace emptydir-wrapper-2968 deletion completed in 6.186146619s

• [SLOW TEST:12.426 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:06:05.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3812
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 24 04:06:05.647: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 24 04:06:29.915: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.124:8080/dial?request=hostName&protocol=http&host=10.244.2.123&port=8080&tries=1'] Namespace:pod-network-test-3812 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:06:29.915: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:06:30.031568       7 log.go:172] (0x6db32d0) (0x6db3340) Create stream
I0824 04:06:30.032141       7 log.go:172] (0x6db32d0) (0x6db3340) Stream added, broadcasting: 1
I0824 04:06:30.054277       7 log.go:172] (0x6db32d0) Reply frame received for 1
I0824 04:06:30.055156       7 log.go:172] (0x6db32d0) (0x7bfc000) Create stream
I0824 04:06:30.055268       7 log.go:172] (0x6db32d0) (0x7bfc000) Stream added, broadcasting: 3
I0824 04:06:30.057429       7 log.go:172] (0x6db32d0) Reply frame received for 3
I0824 04:06:30.057770       7 log.go:172] (0x6db32d0) (0x7bfc070) Create stream
I0824 04:06:30.057878       7 log.go:172] (0x6db32d0) (0x7bfc070) Stream added, broadcasting: 5
I0824 04:06:30.059397       7 log.go:172] (0x6db32d0) Reply frame received for 5
I0824 04:06:30.138088       7 log.go:172] (0x6db32d0) Data frame received for 3
I0824 04:06:30.138310       7 log.go:172] (0x6db32d0) Data frame received for 1
I0824 04:06:30.138600       7 log.go:172] (0x6db32d0) Data frame received for 5
I0824 04:06:30.138714       7 log.go:172] (0x6db3340) (1) Data frame handling
I0824 04:06:30.139105       7 log.go:172] (0x7bfc000) (3) Data frame handling
I0824 04:06:30.139364       7 log.go:172] (0x7bfc070) (5) Data frame handling
I0824 04:06:30.140002       7 log.go:172] (0x7bfc000) (3) Data frame sent
I0824 04:06:30.140212       7 log.go:172] (0x6db32d0) Data frame received for 3
I0824 04:06:30.140336       7 log.go:172] (0x7bfc000) (3) Data frame handling
I0824 04:06:30.140542       7 log.go:172] (0x6db3340) (1) Data frame sent
I0824 04:06:30.142174       7 log.go:172] (0x6db32d0) (0x6db3340) Stream removed, broadcasting: 1
I0824 04:06:30.142674       7 log.go:172] (0x6db32d0) Go away received
I0824 04:06:30.143609       7 log.go:172] (0x6db32d0) (0x6db3340) Stream removed, broadcasting: 1
I0824 04:06:30.143960       7 log.go:172] (0x6db32d0) (0x7bfc000) Stream removed, broadcasting: 3
I0824 04:06:30.144583       7 log.go:172] (0x6db32d0) (0x7bfc070) Stream removed, broadcasting: 5
Aug 24 04:06:30.145: INFO: Waiting for endpoints: map[]
Aug 24 04:06:30.150: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.124:8080/dial?request=hostName&protocol=http&host=10.244.1.146&port=8080&tries=1'] Namespace:pod-network-test-3812 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:06:30.150: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:06:30.299674       7 log.go:172] (0x7e0e5b0) (0x7e0e620) Create stream
I0824 04:06:30.299932       7 log.go:172] (0x7e0e5b0) (0x7e0e620) Stream added, broadcasting: 1
I0824 04:06:30.308948       7 log.go:172] (0x7e0e5b0) Reply frame received for 1
I0824 04:06:30.309140       7 log.go:172] (0x7e0e5b0) (0x6cb8ee0) Create stream
I0824 04:06:30.309219       7 log.go:172] (0x7e0e5b0) (0x6cb8ee0) Stream added, broadcasting: 3
I0824 04:06:30.311073       7 log.go:172] (0x7e0e5b0) Reply frame received for 3
I0824 04:06:30.311394       7 log.go:172] (0x7e0e5b0) (0x7e0e690) Create stream
I0824 04:06:30.311523       7 log.go:172] (0x7e0e5b0) (0x7e0e690) Stream added, broadcasting: 5
I0824 04:06:30.313364       7 log.go:172] (0x7e0e5b0) Reply frame received for 5
I0824 04:06:30.381410       7 log.go:172] (0x7e0e5b0) Data frame received for 3
I0824 04:06:30.381632       7 log.go:172] (0x6cb8ee0) (3) Data frame handling
I0824 04:06:30.381775       7 log.go:172] (0x6cb8ee0) (3) Data frame sent
I0824 04:06:30.381889       7 log.go:172] (0x7e0e5b0) Data frame received for 3
I0824 04:06:30.382025       7 log.go:172] (0x6cb8ee0) (3) Data frame handling
I0824 04:06:30.382200       7 log.go:172] (0x7e0e5b0) Data frame received for 5
I0824 04:06:30.382371       7 log.go:172] (0x7e0e690) (5) Data frame handling
I0824 04:06:30.383451       7 log.go:172] (0x7e0e5b0) Data frame received for 1
I0824 04:06:30.383564       7 log.go:172] (0x7e0e620) (1) Data frame handling
I0824 04:06:30.383691       7 log.go:172] (0x7e0e620) (1) Data frame sent
I0824 04:06:30.383850       7 log.go:172] (0x7e0e5b0) (0x7e0e620) Stream removed, broadcasting: 1
I0824 04:06:30.384019       7 log.go:172] (0x7e0e5b0) Go away received
I0824 04:06:30.384342       7 log.go:172] (0x7e0e5b0) (0x7e0e620) Stream removed, broadcasting: 1
I0824 04:06:30.384447       7 log.go:172] (0x7e0e5b0) (0x6cb8ee0) Stream removed, broadcasting: 3
I0824 04:06:30.384522       7 log.go:172] (0x7e0e5b0) (0x7e0e690) Stream removed, broadcasting: 5
Aug 24 04:06:30.384: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:06:30.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3812" for this suite.
Aug 24 04:06:52.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:06:52.550: INFO: namespace pod-network-test-3812 deletion completed in 22.154355979s

• [SLOW TEST:47.125 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:06:52.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Aug 24 04:06:52.685: INFO: Waiting up to 5m0s for pod "pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79" in namespace "emptydir-8917" to be "success or failure"
Aug 24 04:06:52.705: INFO: Pod "pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 20.40961ms
Aug 24 04:06:54.712: INFO: Pod "pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027132477s
Aug 24 04:06:56.719: INFO: Pod "pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79": Phase="Running", Reason="", readiness=true. Elapsed: 4.034611709s
Aug 24 04:06:58.727: INFO: Pod "pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.042328837s
STEP: Saw pod success
Aug 24 04:06:58.727: INFO: Pod "pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79" satisfied condition "success or failure"
Aug 24 04:06:58.733: INFO: Trying to get logs from node iruya-worker pod pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79 container test-container: 
STEP: delete the pod
Aug 24 04:06:58.759: INFO: Waiting for pod pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79 to disappear
Aug 24 04:06:58.766: INFO: Pod pod-ef58dc39-05c4-48ce-85b4-b7b661de8b79 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:06:58.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8917" for this suite.
Aug 24 04:07:04.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:07:04.903: INFO: namespace emptydir-8917 deletion completed in 6.1288068s

• [SLOW TEST:12.350 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:07:04.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:07:10.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2393" for this suite.
Aug 24 04:07:16.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:07:16.883: INFO: namespace watch-2393 deletion completed in 6.234392558s

• [SLOW TEST:11.977 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:07:16.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0824 04:07:57.232347       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 24 04:07:57.233: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:07:57.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2017" for this suite.
Aug 24 04:08:13.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:08:13.382: INFO: namespace gc-2017 deletion completed in 16.141498214s

• [SLOW TEST:56.496 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:08:13.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:08:13.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:08:17.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6057" for this suite.
Aug 24 04:08:57.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:08:57.793: INFO: namespace pods-6057 deletion completed in 40.226946602s

• [SLOW TEST:44.410 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:08:57.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-6a161bb5-6e9f-4b8e-a25a-812bb11b2d7a
STEP: Creating a pod to test consume secrets
Aug 24 04:08:58.050: INFO: Waiting up to 5m0s for pod "pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1" in namespace "secrets-6573" to be "success or failure"
Aug 24 04:08:58.102: INFO: Pod "pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 51.815211ms
Aug 24 04:09:00.107: INFO: Pod "pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056719351s
Aug 24 04:09:02.155: INFO: Pod "pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10442412s
Aug 24 04:09:04.335: INFO: Pod "pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.284599266s
STEP: Saw pod success
Aug 24 04:09:04.335: INFO: Pod "pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1" satisfied condition "success or failure"
Aug 24 04:09:04.503: INFO: Trying to get logs from node iruya-worker pod pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1 container secret-volume-test: 
STEP: delete the pod
Aug 24 04:09:04.527: INFO: Waiting for pod pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1 to disappear
Aug 24 04:09:04.544: INFO: Pod pod-secrets-c0710334-24d8-4507-874c-b15b3bc66eb1 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:09:04.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6573" for this suite.
Aug 24 04:09:10.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:09:10.662: INFO: namespace secrets-6573 deletion completed in 6.109912947s

• [SLOW TEST:12.868 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:09:10.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-1913
I0824 04:09:10.825966       7 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1913, replica count: 1
I0824 04:09:11.877925       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0824 04:09:12.878769       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0824 04:09:13.879392       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0824 04:09:14.879998       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0824 04:09:15.880573       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0824 04:09:16.881533       7 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 24 04:09:17.055: INFO: Created: latency-svc-9zghl
Aug 24 04:09:17.058: INFO: Got endpoints: latency-svc-9zghl [75.12546ms]
Aug 24 04:09:17.153: INFO: Created: latency-svc-b7qnz
Aug 24 04:09:17.161: INFO: Got endpoints: latency-svc-b7qnz [102.754425ms]
Aug 24 04:09:17.234: INFO: Created: latency-svc-nf62l
Aug 24 04:09:17.236: INFO: Got endpoints: latency-svc-nf62l [177.226388ms]
Aug 24 04:09:17.295: INFO: Created: latency-svc-jfwnq
Aug 24 04:09:17.330: INFO: Got endpoints: latency-svc-jfwnq [271.203081ms]
Aug 24 04:09:17.407: INFO: Created: latency-svc-sdv25
Aug 24 04:09:17.409: INFO: Got endpoints: latency-svc-sdv25 [350.47444ms]
Aug 24 04:09:17.448: INFO: Created: latency-svc-sh2qf
Aug 24 04:09:17.464: INFO: Got endpoints: latency-svc-sh2qf [405.009787ms]
Aug 24 04:09:17.482: INFO: Created: latency-svc-w6lb2
Aug 24 04:09:17.499: INFO: Got endpoints: latency-svc-w6lb2 [440.365089ms]
Aug 24 04:09:17.556: INFO: Created: latency-svc-9s4zw
Aug 24 04:09:17.559: INFO: Got endpoints: latency-svc-9s4zw [500.603756ms]
Aug 24 04:09:17.616: INFO: Created: latency-svc-xpbls
Aug 24 04:09:17.638: INFO: Got endpoints: latency-svc-xpbls [579.254367ms]
Aug 24 04:09:17.749: INFO: Created: latency-svc-jshpm
Aug 24 04:09:17.751: INFO: Got endpoints: latency-svc-jshpm [691.958934ms]
Aug 24 04:09:17.795: INFO: Created: latency-svc-8gllr
Aug 24 04:09:17.800: INFO: Got endpoints: latency-svc-8gllr [740.913872ms]
Aug 24 04:09:17.820: INFO: Created: latency-svc-8pck2
Aug 24 04:09:17.837: INFO: Got endpoints: latency-svc-8pck2 [777.809297ms]
Aug 24 04:09:17.896: INFO: Created: latency-svc-xddcw
Aug 24 04:09:17.927: INFO: Got endpoints: latency-svc-xddcw [867.962646ms]
Aug 24 04:09:17.970: INFO: Created: latency-svc-gg7rn
Aug 24 04:09:18.047: INFO: Got endpoints: latency-svc-gg7rn [988.357183ms]
Aug 24 04:09:18.049: INFO: Created: latency-svc-4t768
Aug 24 04:09:18.100: INFO: Got endpoints: latency-svc-4t768 [1.041080388s]
Aug 24 04:09:18.186: INFO: Created: latency-svc-6hhp4
Aug 24 04:09:18.189: INFO: Got endpoints: latency-svc-6hhp4 [1.130659958s]
Aug 24 04:09:18.235: INFO: Created: latency-svc-5b24b
Aug 24 04:09:18.251: INFO: Got endpoints: latency-svc-5b24b [1.089069162s]
Aug 24 04:09:18.271: INFO: Created: latency-svc-6ptw7
Aug 24 04:09:18.400: INFO: Got endpoints: latency-svc-6ptw7 [1.164500699s]
Aug 24 04:09:18.408: INFO: Created: latency-svc-6tgvp
Aug 24 04:09:18.457: INFO: Got endpoints: latency-svc-6tgvp [1.126496728s]
Aug 24 04:09:18.545: INFO: Created: latency-svc-r28jr
Aug 24 04:09:18.551: INFO: Got endpoints: latency-svc-r28jr [1.141832147s]
Aug 24 04:09:18.590: INFO: Created: latency-svc-zbtnp
Aug 24 04:09:18.605: INFO: Got endpoints: latency-svc-zbtnp [1.141418086s]
Aug 24 04:09:18.643: INFO: Created: latency-svc-5qcnt
Aug 24 04:09:18.736: INFO: Got endpoints: latency-svc-5qcnt [1.236303801s]
Aug 24 04:09:19.245: INFO: Created: latency-svc-2mqhq
Aug 24 04:09:19.307: INFO: Got endpoints: latency-svc-2mqhq [1.747373279s]
Aug 24 04:09:19.558: INFO: Created: latency-svc-64cjv
Aug 24 04:09:19.577: INFO: Got endpoints: latency-svc-64cjv [1.938105733s]
Aug 24 04:09:19.850: INFO: Created: latency-svc-x6l2t
Aug 24 04:09:19.854: INFO: Got endpoints: latency-svc-x6l2t [2.102714488s]
Aug 24 04:09:20.283: INFO: Created: latency-svc-4dzhl
Aug 24 04:09:20.527: INFO: Got endpoints: latency-svc-4dzhl [2.726715334s]
Aug 24 04:09:20.531: INFO: Created: latency-svc-qw4zb
Aug 24 04:09:20.539: INFO: Got endpoints: latency-svc-qw4zb [2.701496719s]
Aug 24 04:09:20.930: INFO: Created: latency-svc-nr2jg
Aug 24 04:09:20.969: INFO: Got endpoints: latency-svc-nr2jg [3.041784313s]
Aug 24 04:09:21.681: INFO: Created: latency-svc-wcxr4
Aug 24 04:09:21.886: INFO: Got endpoints: latency-svc-wcxr4 [3.838274936s]
Aug 24 04:09:21.946: INFO: Created: latency-svc-t57dp
Aug 24 04:09:22.109: INFO: Got endpoints: latency-svc-t57dp [4.008939897s]
Aug 24 04:09:22.203: INFO: Created: latency-svc-rpq62
Aug 24 04:09:22.287: INFO: Got endpoints: latency-svc-rpq62 [4.097700832s]
Aug 24 04:09:22.342: INFO: Created: latency-svc-q9ctp
Aug 24 04:09:22.365: INFO: Got endpoints: latency-svc-q9ctp [4.114201103s]
Aug 24 04:09:22.462: INFO: Created: latency-svc-n4mzd
Aug 24 04:09:22.473: INFO: Got endpoints: latency-svc-n4mzd [4.072038838s]
Aug 24 04:09:22.550: INFO: Created: latency-svc-zd4th
Aug 24 04:09:22.611: INFO: Got endpoints: latency-svc-zd4th [4.153672075s]
Aug 24 04:09:22.670: INFO: Created: latency-svc-5m6x8
Aug 24 04:09:22.695: INFO: Got endpoints: latency-svc-5m6x8 [4.144056361s]
Aug 24 04:09:22.780: INFO: Created: latency-svc-7jwxk
Aug 24 04:09:22.781: INFO: Got endpoints: latency-svc-7jwxk [4.175837562s]
Aug 24 04:09:22.832: INFO: Created: latency-svc-nb26m
Aug 24 04:09:22.852: INFO: Got endpoints: latency-svc-nb26m [4.11549216s]
Aug 24 04:09:22.934: INFO: Created: latency-svc-k2pfd
Aug 24 04:09:22.941: INFO: Got endpoints: latency-svc-k2pfd [3.634246201s]
Aug 24 04:09:22.966: INFO: Created: latency-svc-fmkdk
Aug 24 04:09:22.984: INFO: Got endpoints: latency-svc-fmkdk [3.407531756s]
Aug 24 04:09:23.029: INFO: Created: latency-svc-kpmpv
Aug 24 04:09:23.101: INFO: Got endpoints: latency-svc-kpmpv [3.247025369s]
Aug 24 04:09:23.103: INFO: Created: latency-svc-hh42r
Aug 24 04:09:23.116: INFO: Got endpoints: latency-svc-hh42r [2.58868379s]
Aug 24 04:09:23.335: INFO: Created: latency-svc-xfvwz
Aug 24 04:09:23.374: INFO: Got endpoints: latency-svc-xfvwz [2.83542325s]
Aug 24 04:09:23.485: INFO: Created: latency-svc-gdbnb
Aug 24 04:09:23.488: INFO: Got endpoints: latency-svc-gdbnb [2.518779462s]
Aug 24 04:09:23.534: INFO: Created: latency-svc-jzrjf
Aug 24 04:09:23.542: INFO: Got endpoints: latency-svc-jzrjf [1.656193254s]
Aug 24 04:09:23.560: INFO: Created: latency-svc-wdqww
Aug 24 04:09:23.568: INFO: Got endpoints: latency-svc-wdqww [1.458438161s]
Aug 24 04:09:23.628: INFO: Created: latency-svc-q8pzk
Aug 24 04:09:23.632: INFO: Got endpoints: latency-svc-q8pzk [1.344040556s]
Aug 24 04:09:23.699: INFO: Created: latency-svc-cw8dg
Aug 24 04:09:23.711: INFO: Got endpoints: latency-svc-cw8dg [1.345722029s]
Aug 24 04:09:23.760: INFO: Created: latency-svc-qtqm5
Aug 24 04:09:23.765: INFO: Got endpoints: latency-svc-qtqm5 [1.29239997s]
Aug 24 04:09:23.798: INFO: Created: latency-svc-vld9f
Aug 24 04:09:23.827: INFO: Got endpoints: latency-svc-vld9f [1.215595298s]
Aug 24 04:09:23.910: INFO: Created: latency-svc-988tk
Aug 24 04:09:23.916: INFO: Got endpoints: latency-svc-988tk [1.220806311s]
Aug 24 04:09:23.932: INFO: Created: latency-svc-5r692
Aug 24 04:09:23.940: INFO: Got endpoints: latency-svc-5r692 [1.158859482s]
Aug 24 04:09:23.956: INFO: Created: latency-svc-brlb6
Aug 24 04:09:23.978: INFO: Got endpoints: latency-svc-brlb6 [1.126337997s]
Aug 24 04:09:24.008: INFO: Created: latency-svc-hrx95
Aug 24 04:09:24.049: INFO: Got endpoints: latency-svc-hrx95 [1.107276789s]
Aug 24 04:09:24.082: INFO: Created: latency-svc-stbdd
Aug 24 04:09:24.115: INFO: Got endpoints: latency-svc-stbdd [1.130655159s]
Aug 24 04:09:24.242: INFO: Created: latency-svc-m2d5s
Aug 24 04:09:24.272: INFO: Got endpoints: latency-svc-m2d5s [1.170394974s]
Aug 24 04:09:24.335: INFO: Created: latency-svc-wvz42
Aug 24 04:09:24.371: INFO: Got endpoints: latency-svc-wvz42 [1.254311721s]
Aug 24 04:09:24.386: INFO: Created: latency-svc-dvm5b
Aug 24 04:09:24.398: INFO: Got endpoints: latency-svc-dvm5b [1.023237518s]
Aug 24 04:09:24.423: INFO: Created: latency-svc-shsvh
Aug 24 04:09:24.434: INFO: Got endpoints: latency-svc-shsvh [946.21967ms]
Aug 24 04:09:24.455: INFO: Created: latency-svc-s7ct8
Aug 24 04:09:24.472: INFO: Got endpoints: latency-svc-s7ct8 [929.065191ms]
Aug 24 04:09:24.538: INFO: Created: latency-svc-nj7k7
Aug 24 04:09:24.541: INFO: Got endpoints: latency-svc-nj7k7 [972.848234ms]
Aug 24 04:09:24.750: INFO: Created: latency-svc-mpjkc
Aug 24 04:09:24.751: INFO: Got endpoints: latency-svc-mpjkc [1.119641953s]
Aug 24 04:09:25.266: INFO: Created: latency-svc-8xdrg
Aug 24 04:09:25.288: INFO: Got endpoints: latency-svc-8xdrg [1.57675142s]
Aug 24 04:09:25.425: INFO: Created: latency-svc-zk6v5
Aug 24 04:09:25.436: INFO: Got endpoints: latency-svc-zk6v5 [1.670200762s]
Aug 24 04:09:25.523: INFO: Created: latency-svc-fjsf6
Aug 24 04:09:25.629: INFO: Got endpoints: latency-svc-fjsf6 [1.802204089s]
Aug 24 04:09:25.815: INFO: Created: latency-svc-ckl4p
Aug 24 04:09:25.821: INFO: Got endpoints: latency-svc-ckl4p [1.904535338s]
Aug 24 04:09:26.032: INFO: Created: latency-svc-rwfgw
Aug 24 04:09:26.060: INFO: Got endpoints: latency-svc-rwfgw [2.119117375s]
Aug 24 04:09:26.123: INFO: Created: latency-svc-qgzpq
Aug 24 04:09:26.126: INFO: Got endpoints: latency-svc-qgzpq [2.147357678s]
Aug 24 04:09:26.231: INFO: Created: latency-svc-rm855
Aug 24 04:09:26.233: INFO: Got endpoints: latency-svc-rm855 [2.183674801s]
Aug 24 04:09:26.314: INFO: Created: latency-svc-hxxbb
Aug 24 04:09:26.326: INFO: Got endpoints: latency-svc-hxxbb [2.210289483s]
Aug 24 04:09:26.420: INFO: Created: latency-svc-mf7rl
Aug 24 04:09:26.439: INFO: Got endpoints: latency-svc-mf7rl [2.167407966s]
Aug 24 04:09:26.478: INFO: Created: latency-svc-5bgq2
Aug 24 04:09:26.493: INFO: Got endpoints: latency-svc-5bgq2 [2.122100808s]
Aug 24 04:09:26.587: INFO: Created: latency-svc-hpvkq
Aug 24 04:09:26.588: INFO: Got endpoints: latency-svc-hpvkq [2.190007996s]
Aug 24 04:09:26.641: INFO: Created: latency-svc-h6bt5
Aug 24 04:09:26.649: INFO: Got endpoints: latency-svc-h6bt5 [2.214880979s]
Aug 24 04:09:26.787: INFO: Created: latency-svc-lnpfz
Aug 24 04:09:26.794: INFO: Got endpoints: latency-svc-lnpfz [2.321565441s]
Aug 24 04:09:26.819: INFO: Created: latency-svc-dfc2g
Aug 24 04:09:26.823: INFO: Got endpoints: latency-svc-dfc2g [2.282265747s]
Aug 24 04:09:26.843: INFO: Created: latency-svc-zb59c
Aug 24 04:09:26.905: INFO: Got endpoints: latency-svc-zb59c [2.152898556s]
Aug 24 04:09:26.931: INFO: Created: latency-svc-sm6f7
Aug 24 04:09:26.945: INFO: Got endpoints: latency-svc-sm6f7 [1.656103565s]
Aug 24 04:09:26.961: INFO: Created: latency-svc-7928h
Aug 24 04:09:26.975: INFO: Got endpoints: latency-svc-7928h [1.53882917s]
Aug 24 04:09:26.992: INFO: Created: latency-svc-wqv4g
Aug 24 04:09:27.085: INFO: Got endpoints: latency-svc-wqv4g [1.455232137s]
Aug 24 04:09:27.132: INFO: Created: latency-svc-2xmmf
Aug 24 04:09:27.144: INFO: Got endpoints: latency-svc-2xmmf [1.322776195s]
Aug 24 04:09:27.167: INFO: Created: latency-svc-b4v5g
Aug 24 04:09:27.230: INFO: Got endpoints: latency-svc-b4v5g [1.169910211s]
Aug 24 04:09:27.234: INFO: Created: latency-svc-t28kr
Aug 24 04:09:27.253: INFO: Got endpoints: latency-svc-t28kr [1.126655618s]
Aug 24 04:09:27.293: INFO: Created: latency-svc-5d429
Aug 24 04:09:27.301: INFO: Got endpoints: latency-svc-5d429 [1.067707746s]
Aug 24 04:09:27.327: INFO: Created: latency-svc-v9s98
Aug 24 04:09:27.404: INFO: Got endpoints: latency-svc-v9s98 [1.077592059s]
Aug 24 04:09:27.431: INFO: Created: latency-svc-wpjds
Aug 24 04:09:27.446: INFO: Got endpoints: latency-svc-wpjds [1.006269353s]
Aug 24 04:09:27.473: INFO: Created: latency-svc-7tsrb
Aug 24 04:09:27.539: INFO: Got endpoints: latency-svc-7tsrb [1.045661198s]
Aug 24 04:09:27.579: INFO: Created: latency-svc-qssrz
Aug 24 04:09:27.596: INFO: Got endpoints: latency-svc-qssrz [1.007945998s]
Aug 24 04:09:27.616: INFO: Created: latency-svc-xdtdp
Aug 24 04:09:27.633: INFO: Got endpoints: latency-svc-xdtdp [983.83552ms]
Aug 24 04:09:27.720: INFO: Created: latency-svc-w7blm
Aug 24 04:09:27.721: INFO: Got endpoints: latency-svc-w7blm [927.673303ms]
Aug 24 04:09:27.779: INFO: Created: latency-svc-ffcqn
Aug 24 04:09:27.810: INFO: Got endpoints: latency-svc-ffcqn [986.752291ms]
Aug 24 04:09:27.874: INFO: Created: latency-svc-c8pqs
Aug 24 04:09:27.892: INFO: Got endpoints: latency-svc-c8pqs [986.735757ms]
Aug 24 04:09:28.024: INFO: Created: latency-svc-4m2f9
Aug 24 04:09:28.042: INFO: Got endpoints: latency-svc-4m2f9 [1.09751379s]
Aug 24 04:09:28.067: INFO: Created: latency-svc-j4vdb
Aug 24 04:09:28.084: INFO: Got endpoints: latency-svc-j4vdb [1.10908048s]
Aug 24 04:09:28.606: INFO: Created: latency-svc-8rv8v
Aug 24 04:09:28.608: INFO: Got endpoints: latency-svc-8rv8v [1.523313908s]
Aug 24 04:09:29.211: INFO: Created: latency-svc-8zfqz
Aug 24 04:09:29.223: INFO: Got endpoints: latency-svc-8zfqz [2.078742994s]
Aug 24 04:09:29.253: INFO: Created: latency-svc-vr24m
Aug 24 04:09:29.275: INFO: Got endpoints: latency-svc-vr24m [2.044593446s]
Aug 24 04:09:29.330: INFO: Created: latency-svc-bbrqr
Aug 24 04:09:29.344: INFO: Got endpoints: latency-svc-bbrqr [2.091075848s]
Aug 24 04:09:29.368: INFO: Created: latency-svc-72rcb
Aug 24 04:09:29.381: INFO: Got endpoints: latency-svc-72rcb [2.080284473s]
Aug 24 04:09:29.410: INFO: Created: latency-svc-vrrkd
Aug 24 04:09:29.423: INFO: Got endpoints: latency-svc-vrrkd [2.018685864s]
Aug 24 04:09:29.528: INFO: Created: latency-svc-s6qjq
Aug 24 04:09:29.543: INFO: Got endpoints: latency-svc-s6qjq [2.096748136s]
Aug 24 04:09:29.577: INFO: Created: latency-svc-gsvl4
Aug 24 04:09:29.604: INFO: Got endpoints: latency-svc-gsvl4 [2.06478282s]
Aug 24 04:09:29.682: INFO: Created: latency-svc-2pvbs
Aug 24 04:09:29.707: INFO: Got endpoints: latency-svc-2pvbs [2.110401684s]
Aug 24 04:09:29.743: INFO: Created: latency-svc-q7sh5
Aug 24 04:09:29.766: INFO: Got endpoints: latency-svc-q7sh5 [2.132797797s]
Aug 24 04:09:29.874: INFO: Created: latency-svc-mz64v
Aug 24 04:09:29.876: INFO: Got endpoints: latency-svc-mz64v [2.154259883s]
Aug 24 04:09:29.914: INFO: Created: latency-svc-r7k2b
Aug 24 04:09:29.929: INFO: Got endpoints: latency-svc-r7k2b [2.118831339s]
Aug 24 04:09:30.066: INFO: Created: latency-svc-57fwq
Aug 24 04:09:30.067: INFO: Got endpoints: latency-svc-57fwq [2.175502203s]
Aug 24 04:09:30.217: INFO: Created: latency-svc-k9hp4
Aug 24 04:09:30.230: INFO: Got endpoints: latency-svc-k9hp4 [2.187158117s]
Aug 24 04:09:30.290: INFO: Created: latency-svc-kdddz
Aug 24 04:09:30.295: INFO: Got endpoints: latency-svc-kdddz [2.210997907s]
Aug 24 04:09:30.390: INFO: Created: latency-svc-qzl4s
Aug 24 04:09:30.391: INFO: Got endpoints: latency-svc-qzl4s [1.782126943s]
Aug 24 04:09:30.521: INFO: Created: latency-svc-9rfn5
Aug 24 04:09:30.568: INFO: Got endpoints: latency-svc-9rfn5 [1.344137508s]
Aug 24 04:09:30.613: INFO: Created: latency-svc-g4n89
Aug 24 04:09:30.712: INFO: Got endpoints: latency-svc-g4n89 [1.437474016s]
Aug 24 04:09:30.724: INFO: Created: latency-svc-crpv8
Aug 24 04:09:30.781: INFO: Got endpoints: latency-svc-crpv8 [1.436991573s]
Aug 24 04:09:30.898: INFO: Created: latency-svc-6xnnw
Aug 24 04:09:30.910: INFO: Got endpoints: latency-svc-6xnnw [1.528195052s]
Aug 24 04:09:30.937: INFO: Created: latency-svc-gljn6
Aug 24 04:09:30.972: INFO: Got endpoints: latency-svc-gljn6 [1.548940331s]
Aug 24 04:09:31.079: INFO: Created: latency-svc-twdbw
Aug 24 04:09:31.097: INFO: Got endpoints: latency-svc-twdbw [1.554246162s]
Aug 24 04:09:31.119: INFO: Created: latency-svc-mjsdj
Aug 24 04:09:31.133: INFO: Got endpoints: latency-svc-mjsdj [1.528649129s]
Aug 24 04:09:31.157: INFO: Created: latency-svc-w2jck
Aug 24 04:09:31.233: INFO: Got endpoints: latency-svc-w2jck [1.525914173s]
Aug 24 04:09:31.285: INFO: Created: latency-svc-82smt
Aug 24 04:09:31.302: INFO: Got endpoints: latency-svc-82smt [1.535781616s]
Aug 24 04:09:31.378: INFO: Created: latency-svc-9wkv4
Aug 24 04:09:31.406: INFO: Got endpoints: latency-svc-9wkv4 [1.52989975s]
Aug 24 04:09:31.448: INFO: Created: latency-svc-fdn6r
Aug 24 04:09:31.482: INFO: Got endpoints: latency-svc-fdn6r [1.552065952s]
Aug 24 04:09:31.569: INFO: Created: latency-svc-p8mjc
Aug 24 04:09:31.578: INFO: Got endpoints: latency-svc-p8mjc [1.510828805s]
Aug 24 04:09:31.606: INFO: Created: latency-svc-b2dhh
Aug 24 04:09:31.623: INFO: Got endpoints: latency-svc-b2dhh [1.392815096s]
Aug 24 04:09:31.661: INFO: Created: latency-svc-wgc9v
Aug 24 04:09:31.719: INFO: Got endpoints: latency-svc-wgc9v [1.423271096s]
Aug 24 04:09:31.737: INFO: Created: latency-svc-hv49b
Aug 24 04:09:31.753: INFO: Got endpoints: latency-svc-hv49b [1.362552768s]
Aug 24 04:09:31.783: INFO: Created: latency-svc-svwvh
Aug 24 04:09:31.793: INFO: Got endpoints: latency-svc-svwvh [1.224934024s]
Aug 24 04:09:31.859: INFO: Created: latency-svc-qpj4k
Aug 24 04:09:31.888: INFO: Got endpoints: latency-svc-qpj4k [1.175459608s]
Aug 24 04:09:31.941: INFO: Created: latency-svc-wwsq4
Aug 24 04:09:31.999: INFO: Got endpoints: latency-svc-wwsq4 [1.217878044s]
Aug 24 04:09:32.023: INFO: Created: latency-svc-ps544
Aug 24 04:09:32.055: INFO: Got endpoints: latency-svc-ps544 [1.145236601s]
Aug 24 04:09:32.143: INFO: Created: latency-svc-w5pvj
Aug 24 04:09:32.154: INFO: Got endpoints: latency-svc-w5pvj [1.182289454s]
Aug 24 04:09:32.197: INFO: Created: latency-svc-ttlj7
Aug 24 04:09:32.215: INFO: Got endpoints: latency-svc-ttlj7 [1.11772557s]
Aug 24 04:09:32.239: INFO: Created: latency-svc-5qpcz
Aug 24 04:09:32.288: INFO: Got endpoints: latency-svc-5qpcz [1.1557437s]
Aug 24 04:09:32.315: INFO: Created: latency-svc-kjbtv
Aug 24 04:09:32.325: INFO: Got endpoints: latency-svc-kjbtv [1.091505804s]
Aug 24 04:09:32.349: INFO: Created: latency-svc-7pjhd
Aug 24 04:09:32.386: INFO: Got endpoints: latency-svc-7pjhd [1.083485281s]
Aug 24 04:09:32.479: INFO: Created: latency-svc-pv7dn
Aug 24 04:09:32.535: INFO: Got endpoints: latency-svc-pv7dn [1.128801633s]
Aug 24 04:09:32.619: INFO: Created: latency-svc-hlqq8
Aug 24 04:09:32.637: INFO: Got endpoints: latency-svc-hlqq8 [1.155365003s]
Aug 24 04:09:32.704: INFO: Created: latency-svc-4m9jk
Aug 24 04:09:32.773: INFO: Got endpoints: latency-svc-4m9jk [1.194376992s]
Aug 24 04:09:32.775: INFO: Created: latency-svc-p8tj7
Aug 24 04:09:32.788: INFO: Got endpoints: latency-svc-p8tj7 [1.165140885s]
Aug 24 04:09:32.848: INFO: Created: latency-svc-pkx9c
Aug 24 04:09:32.866: INFO: Got endpoints: latency-svc-pkx9c [1.146752484s]
Aug 24 04:09:32.923: INFO: Created: latency-svc-gqcj6
Aug 24 04:09:32.928: INFO: Got endpoints: latency-svc-gqcj6 [1.174719658s]
Aug 24 04:09:32.966: INFO: Created: latency-svc-rp2lj
Aug 24 04:09:32.975: INFO: Got endpoints: latency-svc-rp2lj [1.182384432s]
Aug 24 04:09:32.998: INFO: Created: latency-svc-6xbhq
Aug 24 04:09:33.018: INFO: Got endpoints: latency-svc-6xbhq [1.129435946s]
Aug 24 04:09:33.084: INFO: Created: latency-svc-m8h5q
Aug 24 04:09:33.086: INFO: Got endpoints: latency-svc-m8h5q [1.086862434s]
Aug 24 04:09:33.145: INFO: Created: latency-svc-cgp5b
Aug 24 04:09:33.252: INFO: Got endpoints: latency-svc-cgp5b [1.196463743s]
Aug 24 04:09:33.266: INFO: Created: latency-svc-t9fbp
Aug 24 04:09:33.309: INFO: Got endpoints: latency-svc-t9fbp [1.154563893s]
Aug 24 04:09:33.438: INFO: Created: latency-svc-fwhm2
Aug 24 04:09:33.440: INFO: Got endpoints: latency-svc-fwhm2 [1.224505942s]
Aug 24 04:09:33.501: INFO: Created: latency-svc-6zhn2
Aug 24 04:09:33.524: INFO: Got endpoints: latency-svc-6zhn2 [1.235197637s]
Aug 24 04:09:33.670: INFO: Created: latency-svc-4mwg8
Aug 24 04:09:33.778: INFO: Got endpoints: latency-svc-4mwg8 [1.45365469s]
Aug 24 04:09:33.790: INFO: Created: latency-svc-tsvjv
Aug 24 04:09:33.806: INFO: Got endpoints: latency-svc-tsvjv [1.420199195s]
Aug 24 04:09:33.836: INFO: Created: latency-svc-rhgxt
Aug 24 04:09:33.865: INFO: Got endpoints: latency-svc-rhgxt [1.330114066s]
Aug 24 04:09:33.976: INFO: Created: latency-svc-rkbwq
Aug 24 04:09:33.992: INFO: Got endpoints: latency-svc-rkbwq [1.354771918s]
Aug 24 04:09:34.022: INFO: Created: latency-svc-n6xpd
Aug 24 04:09:34.041: INFO: Got endpoints: latency-svc-n6xpd [1.267716348s]
Aug 24 04:09:34.064: INFO: Created: latency-svc-cm55x
Aug 24 04:09:34.125: INFO: Got endpoints: latency-svc-cm55x [1.336350041s]
Aug 24 04:09:34.142: INFO: Created: latency-svc-cpgtl
Aug 24 04:09:34.174: INFO: Got endpoints: latency-svc-cpgtl [1.307683837s]
Aug 24 04:09:34.215: INFO: Created: latency-svc-qjnb8
Aug 24 04:09:34.275: INFO: Got endpoints: latency-svc-qjnb8 [1.346128841s]
Aug 24 04:09:34.281: INFO: Created: latency-svc-fcjv4
Aug 24 04:09:34.315: INFO: Got endpoints: latency-svc-fcjv4 [1.33920817s]
Aug 24 04:09:34.345: INFO: Created: latency-svc-zdnfc
Aug 24 04:09:34.366: INFO: Got endpoints: latency-svc-zdnfc [1.348302576s]
Aug 24 04:09:34.419: INFO: Created: latency-svc-bnh7c
Aug 24 04:09:34.421: INFO: Got endpoints: latency-svc-bnh7c [1.334370368s]
Aug 24 04:09:34.455: INFO: Created: latency-svc-fgqt7
Aug 24 04:09:34.486: INFO: Got endpoints: latency-svc-fgqt7 [1.233721326s]
Aug 24 04:09:34.515: INFO: Created: latency-svc-6jwmw
Aug 24 04:09:34.569: INFO: Got endpoints: latency-svc-6jwmw [1.259251064s]
Aug 24 04:09:34.586: INFO: Created: latency-svc-d2lz9
Aug 24 04:09:34.602: INFO: Got endpoints: latency-svc-d2lz9 [1.162195547s]
Aug 24 04:09:34.640: INFO: Created: latency-svc-jsm2j
Aug 24 04:09:34.663: INFO: Got endpoints: latency-svc-jsm2j [1.138998031s]
Aug 24 04:09:34.736: INFO: Created: latency-svc-srgpg
Aug 24 04:09:34.741: INFO: Got endpoints: latency-svc-srgpg [961.889764ms]
Aug 24 04:09:34.946: INFO: Created: latency-svc-m8c94
Aug 24 04:09:35.000: INFO: Got endpoints: latency-svc-m8c94 [1.193180913s]
Aug 24 04:09:35.155: INFO: Created: latency-svc-kxl8x
Aug 24 04:09:35.186: INFO: Got endpoints: latency-svc-kxl8x [1.320580727s]
Aug 24 04:09:35.305: INFO: Created: latency-svc-84pqt
Aug 24 04:09:35.337: INFO: Got endpoints: latency-svc-84pqt [1.344720483s]
Aug 24 04:09:35.378: INFO: Created: latency-svc-hptfn
Aug 24 04:09:35.455: INFO: Got endpoints: latency-svc-hptfn [1.413660325s]
Aug 24 04:09:35.461: INFO: Created: latency-svc-44hkc
Aug 24 04:09:35.482: INFO: Got endpoints: latency-svc-44hkc [1.35688287s]
Aug 24 04:09:35.630: INFO: Created: latency-svc-ckjsn
Aug 24 04:09:36.167: INFO: Got endpoints: latency-svc-ckjsn [1.993494696s]
Aug 24 04:09:36.463: INFO: Created: latency-svc-gfs6q
Aug 24 04:09:36.495: INFO: Got endpoints: latency-svc-gfs6q [2.220386707s]
Aug 24 04:09:36.775: INFO: Created: latency-svc-2kqg4
Aug 24 04:09:36.965: INFO: Got endpoints: latency-svc-2kqg4 [2.649429547s]
Aug 24 04:09:37.041: INFO: Created: latency-svc-c6265
Aug 24 04:09:37.186: INFO: Got endpoints: latency-svc-c6265 [2.819317494s]
Aug 24 04:09:37.384: INFO: Created: latency-svc-tb9wj
Aug 24 04:09:37.419: INFO: Got endpoints: latency-svc-tb9wj [2.998078048s]
Aug 24 04:09:37.641: INFO: Created: latency-svc-wk996
Aug 24 04:09:37.725: INFO: Created: latency-svc-899wf
Aug 24 04:09:37.725: INFO: Got endpoints: latency-svc-wk996 [3.239409838s]
Aug 24 04:09:37.809: INFO: Got endpoints: latency-svc-899wf [3.24013073s]
Aug 24 04:09:37.830: INFO: Created: latency-svc-vx75n
Aug 24 04:09:37.840: INFO: Got endpoints: latency-svc-vx75n [3.23754728s]
Aug 24 04:09:37.858: INFO: Created: latency-svc-sn5tq
Aug 24 04:09:37.870: INFO: Got endpoints: latency-svc-sn5tq [3.20673023s]
Aug 24 04:09:37.901: INFO: Created: latency-svc-2jl4c
Aug 24 04:09:37.964: INFO: Got endpoints: latency-svc-2jl4c [3.223098695s]
Aug 24 04:09:38.019: INFO: Created: latency-svc-52gtn
Aug 24 04:09:38.033: INFO: Got endpoints: latency-svc-52gtn [3.032839184s]
Aug 24 04:09:38.060: INFO: Created: latency-svc-jft5c
Aug 24 04:09:38.120: INFO: Got endpoints: latency-svc-jft5c [2.933231761s]
Aug 24 04:09:38.128: INFO: Created: latency-svc-mccfd
Aug 24 04:09:38.148: INFO: Got endpoints: latency-svc-mccfd [2.810530034s]
Aug 24 04:09:38.177: INFO: Created: latency-svc-gwlcc
Aug 24 04:09:38.191: INFO: Got endpoints: latency-svc-gwlcc [2.736447774s]
Aug 24 04:09:38.219: INFO: Created: latency-svc-jwjmf
Aug 24 04:09:38.275: INFO: Got endpoints: latency-svc-jwjmf [2.792726867s]
Aug 24 04:09:38.291: INFO: Created: latency-svc-k6nfh
Aug 24 04:09:38.334: INFO: Got endpoints: latency-svc-k6nfh [2.166725303s]
Aug 24 04:09:38.368: INFO: Created: latency-svc-z2xsp
Aug 24 04:09:38.419: INFO: Got endpoints: latency-svc-z2xsp [1.923498459s]
Aug 24 04:09:38.434: INFO: Created: latency-svc-4d54j
Aug 24 04:09:38.486: INFO: Got endpoints: latency-svc-4d54j [1.521556009s]
Aug 24 04:09:38.569: INFO: Created: latency-svc-xl2dx
Aug 24 04:09:38.581: INFO: Got endpoints: latency-svc-xl2dx [1.395272355s]
Aug 24 04:09:38.625: INFO: Created: latency-svc-tnq4p
Aug 24 04:09:38.643: INFO: Got endpoints: latency-svc-tnq4p [1.223542485s]
Aug 24 04:09:38.667: INFO: Created: latency-svc-2chbz
Aug 24 04:09:38.749: INFO: Got endpoints: latency-svc-2chbz [1.022912043s]
Aug 24 04:09:38.758: INFO: Created: latency-svc-hmrmk
Aug 24 04:09:38.776: INFO: Got endpoints: latency-svc-hmrmk [966.487982ms]
Aug 24 04:09:38.800: INFO: Created: latency-svc-mj6lx
Aug 24 04:09:38.818: INFO: Got endpoints: latency-svc-mj6lx [978.25166ms]
Aug 24 04:09:38.935: INFO: Created: latency-svc-hqwc5
Aug 24 04:09:38.942: INFO: Got endpoints: latency-svc-hqwc5 [1.071631181s]
Aug 24 04:09:39.002: INFO: Created: latency-svc-86dmp
Aug 24 04:09:39.004: INFO: Got endpoints: latency-svc-86dmp [1.039687173s]
Aug 24 04:09:39.034: INFO: Created: latency-svc-m7bhq
Aug 24 04:09:39.096: INFO: Got endpoints: latency-svc-m7bhq [1.062683358s]
Aug 24 04:09:39.194: INFO: Created: latency-svc-7t8qx
Aug 24 04:09:39.240: INFO: Got endpoints: latency-svc-7t8qx [1.119903786s]
Aug 24 04:09:39.254: INFO: Created: latency-svc-gmf8c
Aug 24 04:09:39.263: INFO: Got endpoints: latency-svc-gmf8c [1.114783011s]
Aug 24 04:09:39.303: INFO: Created: latency-svc-28pl6
Aug 24 04:09:39.312: INFO: Got endpoints: latency-svc-28pl6 [1.12008949s]
Aug 24 04:09:39.340: INFO: Created: latency-svc-k4tqm
Aug 24 04:09:39.377: INFO: Got endpoints: latency-svc-k4tqm [1.101981767s]
Aug 24 04:09:39.406: INFO: Created: latency-svc-4lfv4
Aug 24 04:09:39.417: INFO: Got endpoints: latency-svc-4lfv4 [1.08269953s]
Aug 24 04:09:39.439: INFO: Created: latency-svc-kzg9b
Aug 24 04:09:39.459: INFO: Got endpoints: latency-svc-kzg9b [1.039568165s]
Aug 24 04:09:39.522: INFO: Created: latency-svc-f4p9k
Aug 24 04:09:39.541: INFO: Got endpoints: latency-svc-f4p9k [1.054549125s]
Aug 24 04:09:39.597: INFO: Created: latency-svc-f45v7
Aug 24 04:09:39.614: INFO: Got endpoints: latency-svc-f45v7 [1.031842953s]
Aug 24 04:09:39.615: INFO: Latencies: [102.754425ms 177.226388ms 271.203081ms 350.47444ms 405.009787ms 440.365089ms 500.603756ms 579.254367ms 691.958934ms 740.913872ms 777.809297ms 867.962646ms 927.673303ms 929.065191ms 946.21967ms 961.889764ms 966.487982ms 972.848234ms 978.25166ms 983.83552ms 986.735757ms 986.752291ms 988.357183ms 1.006269353s 1.007945998s 1.022912043s 1.023237518s 1.031842953s 1.039568165s 1.039687173s 1.041080388s 1.045661198s 1.054549125s 1.062683358s 1.067707746s 1.071631181s 1.077592059s 1.08269953s 1.083485281s 1.086862434s 1.089069162s 1.091505804s 1.09751379s 1.101981767s 1.107276789s 1.10908048s 1.114783011s 1.11772557s 1.119641953s 1.119903786s 1.12008949s 1.126337997s 1.126496728s 1.126655618s 1.128801633s 1.129435946s 1.130655159s 1.130659958s 1.138998031s 1.141418086s 1.141832147s 1.145236601s 1.146752484s 1.154563893s 1.155365003s 1.1557437s 1.158859482s 1.162195547s 1.164500699s 1.165140885s 1.169910211s 1.170394974s 1.174719658s 1.175459608s 1.182289454s 1.182384432s 1.193180913s 1.194376992s 1.196463743s 1.215595298s 1.217878044s 1.220806311s 1.223542485s 1.224505942s 1.224934024s 1.233721326s 1.235197637s 1.236303801s 1.254311721s 1.259251064s 1.267716348s 1.29239997s 1.307683837s 1.320580727s 1.322776195s 1.330114066s 1.334370368s 1.336350041s 1.33920817s 1.344040556s 1.344137508s 1.344720483s 1.345722029s 1.346128841s 1.348302576s 1.354771918s 1.35688287s 1.362552768s 1.392815096s 1.395272355s 1.413660325s 1.420199195s 1.423271096s 1.436991573s 1.437474016s 1.45365469s 1.455232137s 1.458438161s 1.510828805s 1.521556009s 1.523313908s 1.525914173s 1.528195052s 1.528649129s 1.52989975s 1.535781616s 1.53882917s 1.548940331s 1.552065952s 1.554246162s 1.57675142s 1.656103565s 1.656193254s 1.670200762s 1.747373279s 1.782126943s 1.802204089s 1.904535338s 1.923498459s 1.938105733s 1.993494696s 2.018685864s 2.044593446s 2.06478282s 2.078742994s 2.080284473s 2.091075848s 2.096748136s 2.102714488s 2.110401684s 2.118831339s 2.119117375s 2.122100808s 2.132797797s 2.147357678s 2.152898556s 2.154259883s 2.166725303s 2.167407966s 2.175502203s 2.183674801s 2.187158117s 2.190007996s 2.210289483s 2.210997907s 2.214880979s 2.220386707s 2.282265747s 2.321565441s 2.518779462s 2.58868379s 2.649429547s 2.701496719s 2.726715334s 2.736447774s 2.792726867s 2.810530034s 2.819317494s 2.83542325s 2.933231761s 2.998078048s 3.032839184s 3.041784313s 3.20673023s 3.223098695s 3.23754728s 3.239409838s 3.24013073s 3.247025369s 3.407531756s 3.634246201s 3.838274936s 4.008939897s 4.072038838s 4.097700832s 4.114201103s 4.11549216s 4.144056361s 4.153672075s 4.175837562s]
Aug 24 04:09:39.617: INFO: 50 %ile: 1.344137508s
Aug 24 04:09:39.618: INFO: 90 %ile: 2.998078048s
Aug 24 04:09:39.618: INFO: 99 %ile: 4.153672075s
Aug 24 04:09:39.618: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:09:39.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1913" for this suite.
Aug 24 04:10:45.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:10:45.833: INFO: namespace svc-latency-1913 deletion completed in 1m6.161446104s

• [SLOW TEST:95.170 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:10:45.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Aug 24 04:11:00.283: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:00.283: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:00.390135       7 log.go:172] (0x6db3260) (0x6db32d0) Create stream
I0824 04:11:00.390309       7 log.go:172] (0x6db3260) (0x6db32d0) Stream added, broadcasting: 1
I0824 04:11:00.394733       7 log.go:172] (0x6db3260) Reply frame received for 1
I0824 04:11:00.395066       7 log.go:172] (0x6db3260) (0x74dd1f0) Create stream
I0824 04:11:00.395213       7 log.go:172] (0x6db3260) (0x74dd1f0) Stream added, broadcasting: 3
I0824 04:11:00.397724       7 log.go:172] (0x6db3260) Reply frame received for 3
I0824 04:11:00.397874       7 log.go:172] (0x6db3260) (0x6db3340) Create stream
I0824 04:11:00.397939       7 log.go:172] (0x6db3260) (0x6db3340) Stream added, broadcasting: 5
I0824 04:11:00.399504       7 log.go:172] (0x6db3260) Reply frame received for 5
I0824 04:11:00.483959       7 log.go:172] (0x6db3260) Data frame received for 5
I0824 04:11:00.484102       7 log.go:172] (0x6db3340) (5) Data frame handling
I0824 04:11:00.484294       7 log.go:172] (0x6db3260) Data frame received for 3
I0824 04:11:00.484495       7 log.go:172] (0x74dd1f0) (3) Data frame handling
I0824 04:11:00.484879       7 log.go:172] (0x74dd1f0) (3) Data frame sent
I0824 04:11:00.485090       7 log.go:172] (0x6db3260) Data frame received for 3
I0824 04:11:00.485283       7 log.go:172] (0x74dd1f0) (3) Data frame handling
I0824 04:11:00.485443       7 log.go:172] (0x6db3260) Data frame received for 1
I0824 04:11:00.485563       7 log.go:172] (0x6db32d0) (1) Data frame handling
I0824 04:11:00.485662       7 log.go:172] (0x6db32d0) (1) Data frame sent
I0824 04:11:00.485754       7 log.go:172] (0x6db3260) (0x6db32d0) Stream removed, broadcasting: 1
I0824 04:11:00.485889       7 log.go:172] (0x6db3260) Go away received
I0824 04:11:00.486338       7 log.go:172] (0x6db3260) (0x6db32d0) Stream removed, broadcasting: 1
I0824 04:11:00.486458       7 log.go:172] (0x6db3260) (0x74dd1f0) Stream removed, broadcasting: 3
I0824 04:11:00.486538       7 log.go:172] (0x6db3260) (0x6db3340) Stream removed, broadcasting: 5
Aug 24 04:11:00.486: INFO: Exec stderr: ""
Aug 24 04:11:00.487: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:00.487: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:00.582913       7 log.go:172] (0x95827e0) (0x95828c0) Create stream
I0824 04:11:00.583049       7 log.go:172] (0x95827e0) (0x95828c0) Stream added, broadcasting: 1
I0824 04:11:00.586723       7 log.go:172] (0x95827e0) Reply frame received for 1
I0824 04:11:00.586883       7 log.go:172] (0x95827e0) (0x6db33b0) Create stream
I0824 04:11:00.586946       7 log.go:172] (0x95827e0) (0x6db33b0) Stream added, broadcasting: 3
I0824 04:11:00.588576       7 log.go:172] (0x95827e0) Reply frame received for 3
I0824 04:11:00.588827       7 log.go:172] (0x95827e0) (0x95829a0) Create stream
I0824 04:11:00.588922       7 log.go:172] (0x95827e0) (0x95829a0) Stream added, broadcasting: 5
I0824 04:11:00.590401       7 log.go:172] (0x95827e0) Reply frame received for 5
I0824 04:11:00.665994       7 log.go:172] (0x95827e0) Data frame received for 5
I0824 04:11:00.666249       7 log.go:172] (0x95829a0) (5) Data frame handling
I0824 04:11:00.666551       7 log.go:172] (0x95827e0) Data frame received for 3
I0824 04:11:00.666904       7 log.go:172] (0x6db33b0) (3) Data frame handling
I0824 04:11:00.667170       7 log.go:172] (0x6db33b0) (3) Data frame sent
I0824 04:11:00.667341       7 log.go:172] (0x95827e0) Data frame received for 3
I0824 04:11:00.667493       7 log.go:172] (0x6db33b0) (3) Data frame handling
I0824 04:11:00.667695       7 log.go:172] (0x95827e0) Data frame received for 1
I0824 04:11:00.667886       7 log.go:172] (0x95828c0) (1) Data frame handling
I0824 04:11:00.668107       7 log.go:172] (0x95828c0) (1) Data frame sent
I0824 04:11:00.668272       7 log.go:172] (0x95827e0) (0x95828c0) Stream removed, broadcasting: 1
I0824 04:11:00.668420       7 log.go:172] (0x95827e0) Go away received
I0824 04:11:00.669206       7 log.go:172] (0x95827e0) (0x95828c0) Stream removed, broadcasting: 1
I0824 04:11:00.669463       7 log.go:172] (0x95827e0) (0x6db33b0) Stream removed, broadcasting: 3
I0824 04:11:00.669711       7 log.go:172] (0x95827e0) (0x95829a0) Stream removed, broadcasting: 5
Aug 24 04:11:00.669: INFO: Exec stderr: ""
Aug 24 04:11:00.670: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:00.670: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:00.766144       7 log.go:172] (0x783ad90) (0x783ae70) Create stream
I0824 04:11:00.766373       7 log.go:172] (0x783ad90) (0x783ae70) Stream added, broadcasting: 1
I0824 04:11:00.770013       7 log.go:172] (0x783ad90) Reply frame received for 1
I0824 04:11:00.770176       7 log.go:172] (0x783ad90) (0x7e0e070) Create stream
I0824 04:11:00.770284       7 log.go:172] (0x783ad90) (0x7e0e070) Stream added, broadcasting: 3
I0824 04:11:00.771853       7 log.go:172] (0x783ad90) Reply frame received for 3
I0824 04:11:00.772022       7 log.go:172] (0x783ad90) (0x783af50) Create stream
I0824 04:11:00.772115       7 log.go:172] (0x783ad90) (0x783af50) Stream added, broadcasting: 5
I0824 04:11:00.773753       7 log.go:172] (0x783ad90) Reply frame received for 5
I0824 04:11:00.830385       7 log.go:172] (0x783ad90) Data frame received for 5
I0824 04:11:00.830623       7 log.go:172] (0x783af50) (5) Data frame handling
I0824 04:11:00.830792       7 log.go:172] (0x783ad90) Data frame received for 3
I0824 04:11:00.830971       7 log.go:172] (0x7e0e070) (3) Data frame handling
I0824 04:11:00.831158       7 log.go:172] (0x7e0e070) (3) Data frame sent
I0824 04:11:00.831324       7 log.go:172] (0x783ad90) Data frame received for 3
I0824 04:11:00.831472       7 log.go:172] (0x7e0e070) (3) Data frame handling
I0824 04:11:00.831727       7 log.go:172] (0x783ad90) Data frame received for 1
I0824 04:11:00.831848       7 log.go:172] (0x783ae70) (1) Data frame handling
I0824 04:11:00.831988       7 log.go:172] (0x783ae70) (1) Data frame sent
I0824 04:11:00.832151       7 log.go:172] (0x783ad90) (0x783ae70) Stream removed, broadcasting: 1
I0824 04:11:00.832322       7 log.go:172] (0x783ad90) Go away received
I0824 04:11:00.832613       7 log.go:172] (0x783ad90) (0x783ae70) Stream removed, broadcasting: 1
I0824 04:11:00.832712       7 log.go:172] (0x783ad90) (0x7e0e070) Stream removed, broadcasting: 3
I0824 04:11:00.832886       7 log.go:172] (0x783ad90) (0x783af50) Stream removed, broadcasting: 5
Aug 24 04:11:00.832: INFO: Exec stderr: ""
Aug 24 04:11:00.833: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:00.833: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:00.932612       7 log.go:172] (0x783b5e0) (0x783b6c0) Create stream
I0824 04:11:00.932823       7 log.go:172] (0x783b5e0) (0x783b6c0) Stream added, broadcasting: 1
I0824 04:11:00.940056       7 log.go:172] (0x783b5e0) Reply frame received for 1
I0824 04:11:00.940295       7 log.go:172] (0x783b5e0) (0x783b7a0) Create stream
I0824 04:11:00.940398       7 log.go:172] (0x783b5e0) (0x783b7a0) Stream added, broadcasting: 3
I0824 04:11:00.942302       7 log.go:172] (0x783b5e0) Reply frame received for 3
I0824 04:11:00.942462       7 log.go:172] (0x783b5e0) (0x783b880) Create stream
I0824 04:11:00.942548       7 log.go:172] (0x783b5e0) (0x783b880) Stream added, broadcasting: 5
I0824 04:11:00.944265       7 log.go:172] (0x783b5e0) Reply frame received for 5
I0824 04:11:01.003884       7 log.go:172] (0x783b5e0) Data frame received for 3
I0824 04:11:01.004084       7 log.go:172] (0x783b7a0) (3) Data frame handling
I0824 04:11:01.004286       7 log.go:172] (0x783b5e0) Data frame received for 5
I0824 04:11:01.004563       7 log.go:172] (0x783b880) (5) Data frame handling
I0824 04:11:01.004806       7 log.go:172] (0x783b7a0) (3) Data frame sent
I0824 04:11:01.004958       7 log.go:172] (0x783b5e0) Data frame received for 3
I0824 04:11:01.005100       7 log.go:172] (0x783b7a0) (3) Data frame handling
I0824 04:11:01.005313       7 log.go:172] (0x783b5e0) Data frame received for 1
I0824 04:11:01.005455       7 log.go:172] (0x783b6c0) (1) Data frame handling
I0824 04:11:01.005609       7 log.go:172] (0x783b6c0) (1) Data frame sent
I0824 04:11:01.005766       7 log.go:172] (0x783b5e0) (0x783b6c0) Stream removed, broadcasting: 1
I0824 04:11:01.005947       7 log.go:172] (0x783b5e0) Go away received
I0824 04:11:01.006444       7 log.go:172] (0x783b5e0) (0x783b6c0) Stream removed, broadcasting: 1
I0824 04:11:01.006591       7 log.go:172] (0x783b5e0) (0x783b7a0) Stream removed, broadcasting: 3
I0824 04:11:01.006694       7 log.go:172] (0x783b5e0) (0x783b880) Stream removed, broadcasting: 5
Aug 24 04:11:01.006: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Aug 24 04:11:01.007: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:01.007: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:01.104546       7 log.go:172] (0x7e0e620) (0x7e0e690) Create stream
I0824 04:11:01.104693       7 log.go:172] (0x7e0e620) (0x7e0e690) Stream added, broadcasting: 1
I0824 04:11:01.108556       7 log.go:172] (0x7e0e620) Reply frame received for 1
I0824 04:11:01.108848       7 log.go:172] (0x7e0e620) (0x7e0e700) Create stream
I0824 04:11:01.108951       7 log.go:172] (0x7e0e620) (0x7e0e700) Stream added, broadcasting: 3
I0824 04:11:01.110534       7 log.go:172] (0x7e0e620) Reply frame received for 3
I0824 04:11:01.110673       7 log.go:172] (0x7e0e620) (0x74dd3b0) Create stream
I0824 04:11:01.110755       7 log.go:172] (0x7e0e620) (0x74dd3b0) Stream added, broadcasting: 5
I0824 04:11:01.112275       7 log.go:172] (0x7e0e620) Reply frame received for 5
I0824 04:11:01.168455       7 log.go:172] (0x7e0e620) Data frame received for 3
I0824 04:11:01.168636       7 log.go:172] (0x7e0e700) (3) Data frame handling
I0824 04:11:01.168837       7 log.go:172] (0x7e0e620) Data frame received for 5
I0824 04:11:01.169006       7 log.go:172] (0x74dd3b0) (5) Data frame handling
I0824 04:11:01.169159       7 log.go:172] (0x7e0e700) (3) Data frame sent
I0824 04:11:01.169379       7 log.go:172] (0x7e0e620) Data frame received for 3
I0824 04:11:01.169524       7 log.go:172] (0x7e0e700) (3) Data frame handling
I0824 04:11:01.169665       7 log.go:172] (0x7e0e620) Data frame received for 1
I0824 04:11:01.169755       7 log.go:172] (0x7e0e690) (1) Data frame handling
I0824 04:11:01.169853       7 log.go:172] (0x7e0e690) (1) Data frame sent
I0824 04:11:01.169937       7 log.go:172] (0x7e0e620) (0x7e0e690) Stream removed, broadcasting: 1
I0824 04:11:01.170056       7 log.go:172] (0x7e0e620) Go away received
I0824 04:11:01.170507       7 log.go:172] (0x7e0e620) (0x7e0e690) Stream removed, broadcasting: 1
I0824 04:11:01.170590       7 log.go:172] (0x7e0e620) (0x7e0e700) Stream removed, broadcasting: 3
I0824 04:11:01.170663       7 log.go:172] (0x7e0e620) (0x74dd3b0) Stream removed, broadcasting: 5
Aug 24 04:11:01.170: INFO: Exec stderr: ""
Aug 24 04:11:01.170: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:01.170: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:01.274370       7 log.go:172] (0x6db37a0) (0x6db3810) Create stream
I0824 04:11:01.274615       7 log.go:172] (0x6db37a0) (0x6db3810) Stream added, broadcasting: 1
I0824 04:11:01.280058       7 log.go:172] (0x6db37a0) Reply frame received for 1
I0824 04:11:01.280326       7 log.go:172] (0x6db37a0) (0x783b960) Create stream
I0824 04:11:01.280495       7 log.go:172] (0x6db37a0) (0x783b960) Stream added, broadcasting: 3
I0824 04:11:01.282367       7 log.go:172] (0x6db37a0) Reply frame received for 3
I0824 04:11:01.282608       7 log.go:172] (0x6db37a0) (0x7690000) Create stream
I0824 04:11:01.282736       7 log.go:172] (0x6db37a0) (0x7690000) Stream added, broadcasting: 5
I0824 04:11:01.284453       7 log.go:172] (0x6db37a0) Reply frame received for 5
I0824 04:11:01.345286       7 log.go:172] (0x6db37a0) Data frame received for 3
I0824 04:11:01.345479       7 log.go:172] (0x783b960) (3) Data frame handling
I0824 04:11:01.345588       7 log.go:172] (0x6db37a0) Data frame received for 5
I0824 04:11:01.345712       7 log.go:172] (0x7690000) (5) Data frame handling
I0824 04:11:01.345822       7 log.go:172] (0x783b960) (3) Data frame sent
I0824 04:11:01.345945       7 log.go:172] (0x6db37a0) Data frame received for 3
I0824 04:11:01.346010       7 log.go:172] (0x783b960) (3) Data frame handling
I0824 04:11:01.346398       7 log.go:172] (0x6db37a0) Data frame received for 1
I0824 04:11:01.346494       7 log.go:172] (0x6db3810) (1) Data frame handling
I0824 04:11:01.346645       7 log.go:172] (0x6db3810) (1) Data frame sent
I0824 04:11:01.346791       7 log.go:172] (0x6db37a0) (0x6db3810) Stream removed, broadcasting: 1
I0824 04:11:01.346897       7 log.go:172] (0x6db37a0) Go away received
I0824 04:11:01.347187       7 log.go:172] (0x6db37a0) (0x6db3810) Stream removed, broadcasting: 1
I0824 04:11:01.347295       7 log.go:172] (0x6db37a0) (0x783b960) Stream removed, broadcasting: 3
I0824 04:11:01.347380       7 log.go:172] (0x6db37a0) (0x7690000) Stream removed, broadcasting: 5
Aug 24 04:11:01.347: INFO: Exec stderr: ""
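Note: the busybox-3 container above opts out of kubelet management by declaring its own mount at /etc/hosts. A minimal sketch of such a spec (pod/volume names are illustrative, not taken from this log; the e2e test uses a similar hostPath mount):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: etc-hosts-override        # illustrative name, not from this run
spec:
  containers:
  - name: busybox-3
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: host-etc-hosts
      mountPath: /etc/hosts       # explicit mount at /etc/hosts disables kubelet management
  volumes:
  - name: host-etc-hosts
    hostPath:
      path: /etc/hosts
```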
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Aug 24 04:11:01.347: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:01.347: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:01.438249       7 log.go:172] (0x6db3b20) (0x6db3b90) Create stream
I0824 04:11:01.438416       7 log.go:172] (0x6db3b20) (0x6db3b90) Stream added, broadcasting: 1
I0824 04:11:01.445518       7 log.go:172] (0x6db3b20) Reply frame received for 1
I0824 04:11:01.445725       7 log.go:172] (0x6db3b20) (0x783ba40) Create stream
I0824 04:11:01.445844       7 log.go:172] (0x6db3b20) (0x783ba40) Stream added, broadcasting: 3
I0824 04:11:01.448942       7 log.go:172] (0x6db3b20) Reply frame received for 3
I0824 04:11:01.449137       7 log.go:172] (0x6db3b20) (0x7e0e7e0) Create stream
I0824 04:11:01.449246       7 log.go:172] (0x6db3b20) (0x7e0e7e0) Stream added, broadcasting: 5
I0824 04:11:01.450620       7 log.go:172] (0x6db3b20) Reply frame received for 5
I0824 04:11:01.519786       7 log.go:172] (0x6db3b20) Data frame received for 3
I0824 04:11:01.519912       7 log.go:172] (0x783ba40) (3) Data frame handling
I0824 04:11:01.519984       7 log.go:172] (0x783ba40) (3) Data frame sent
I0824 04:11:01.520088       7 log.go:172] (0x6db3b20) Data frame received for 5
I0824 04:11:01.520295       7 log.go:172] (0x7e0e7e0) (5) Data frame handling
I0824 04:11:01.520408       7 log.go:172] (0x6db3b20) Data frame received for 3
I0824 04:11:01.520612       7 log.go:172] (0x783ba40) (3) Data frame handling
I0824 04:11:01.521133       7 log.go:172] (0x6db3b20) Data frame received for 1
I0824 04:11:01.521273       7 log.go:172] (0x6db3b90) (1) Data frame handling
I0824 04:11:01.521425       7 log.go:172] (0x6db3b90) (1) Data frame sent
I0824 04:11:01.521576       7 log.go:172] (0x6db3b20) (0x6db3b90) Stream removed, broadcasting: 1
I0824 04:11:01.521767       7 log.go:172] (0x6db3b20) Go away received
I0824 04:11:01.522116       7 log.go:172] (0x6db3b20) (0x6db3b90) Stream removed, broadcasting: 1
I0824 04:11:01.522299       7 log.go:172] (0x6db3b20) (0x783ba40) Stream removed, broadcasting: 3
I0824 04:11:01.522470       7 log.go:172] (0x6db3b20) (0x7e0e7e0) Stream removed, broadcasting: 5
Aug 24 04:11:01.522: INFO: Exec stderr: ""
Aug 24 04:11:01.522: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:01.522: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:01.623971       7 log.go:172] (0x6db3ea0) (0x6db3f10) Create stream
I0824 04:11:01.624207       7 log.go:172] (0x6db3ea0) (0x6db3f10) Stream added, broadcasting: 1
I0824 04:11:01.631776       7 log.go:172] (0x6db3ea0) Reply frame received for 1
I0824 04:11:01.631995       7 log.go:172] (0x6db3ea0) (0x6db3f80) Create stream
I0824 04:11:01.632103       7 log.go:172] (0x6db3ea0) (0x6db3f80) Stream added, broadcasting: 3
I0824 04:11:01.633816       7 log.go:172] (0x6db3ea0) Reply frame received for 3
I0824 04:11:01.633941       7 log.go:172] (0x6db3ea0) (0x9582a80) Create stream
I0824 04:11:01.634011       7 log.go:172] (0x6db3ea0) (0x9582a80) Stream added, broadcasting: 5
I0824 04:11:01.635327       7 log.go:172] (0x6db3ea0) Reply frame received for 5
I0824 04:11:01.714295       7 log.go:172] (0x6db3ea0) Data frame received for 5
I0824 04:11:01.714608       7 log.go:172] (0x9582a80) (5) Data frame handling
I0824 04:11:01.714870       7 log.go:172] (0x6db3ea0) Data frame received for 3
I0824 04:11:01.715064       7 log.go:172] (0x6db3ea0) Data frame received for 1
I0824 04:11:01.715306       7 log.go:172] (0x6db3f10) (1) Data frame handling
I0824 04:11:01.715478       7 log.go:172] (0x6db3f80) (3) Data frame handling
I0824 04:11:01.715612       7 log.go:172] (0x6db3f80) (3) Data frame sent
I0824 04:11:01.715712       7 log.go:172] (0x6db3ea0) Data frame received for 3
I0824 04:11:01.715806       7 log.go:172] (0x6db3f80) (3) Data frame handling
I0824 04:11:01.715926       7 log.go:172] (0x6db3f10) (1) Data frame sent
I0824 04:11:01.716041       7 log.go:172] (0x6db3ea0) (0x6db3f10) Stream removed, broadcasting: 1
I0824 04:11:01.716167       7 log.go:172] (0x6db3ea0) Go away received
I0824 04:11:01.716435       7 log.go:172] (0x6db3ea0) (0x6db3f10) Stream removed, broadcasting: 1
I0824 04:11:01.716517       7 log.go:172] (0x6db3ea0) (0x6db3f80) Stream removed, broadcasting: 3
I0824 04:11:01.716596       7 log.go:172] (0x6db3ea0) (0x9582a80) Stream removed, broadcasting: 5
Aug 24 04:11:01.716: INFO: Exec stderr: ""
Aug 24 04:11:01.716: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:01.717: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:01.814426       7 log.go:172] (0x9575180) (0x9575340) Create stream
I0824 04:11:01.814544       7 log.go:172] (0x9575180) (0x9575340) Stream added, broadcasting: 1
I0824 04:11:01.818251       7 log.go:172] (0x9575180) Reply frame received for 1
I0824 04:11:01.818407       7 log.go:172] (0x9575180) (0x76900e0) Create stream
I0824 04:11:01.818481       7 log.go:172] (0x9575180) (0x76900e0) Stream added, broadcasting: 3
I0824 04:11:01.819623       7 log.go:172] (0x9575180) Reply frame received for 3
I0824 04:11:01.819735       7 log.go:172] (0x9575180) (0x9575500) Create stream
I0824 04:11:01.819793       7 log.go:172] (0x9575180) (0x9575500) Stream added, broadcasting: 5
I0824 04:11:01.821103       7 log.go:172] (0x9575180) Reply frame received for 5
I0824 04:11:01.881653       7 log.go:172] (0x9575180) Data frame received for 3
I0824 04:11:01.881869       7 log.go:172] (0x76900e0) (3) Data frame handling
I0824 04:11:01.882018       7 log.go:172] (0x76900e0) (3) Data frame sent
I0824 04:11:01.882127       7 log.go:172] (0x9575180) Data frame received for 3
I0824 04:11:01.882231       7 log.go:172] (0x76900e0) (3) Data frame handling
I0824 04:11:01.882447       7 log.go:172] (0x9575180) Data frame received for 5
I0824 04:11:01.882615       7 log.go:172] (0x9575500) (5) Data frame handling
I0824 04:11:01.883778       7 log.go:172] (0x9575180) Data frame received for 1
I0824 04:11:01.883965       7 log.go:172] (0x9575340) (1) Data frame handling
I0824 04:11:01.884092       7 log.go:172] (0x9575340) (1) Data frame sent
I0824 04:11:01.884212       7 log.go:172] (0x9575180) (0x9575340) Stream removed, broadcasting: 1
I0824 04:11:01.884364       7 log.go:172] (0x9575180) Go away received
I0824 04:11:01.884713       7 log.go:172] (0x9575180) (0x9575340) Stream removed, broadcasting: 1
I0824 04:11:01.884974       7 log.go:172] (0x9575180) (0x76900e0) Stream removed, broadcasting: 3
I0824 04:11:01.885145       7 log.go:172] (0x9575180) (0x9575500) Stream removed, broadcasting: 5
Aug 24 04:11:01.885: INFO: Exec stderr: ""
Aug 24 04:11:01.885: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8388 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 04:11:01.885: INFO: >>> kubeConfig: /root/.kube/config
I0824 04:11:02.006693       7 log.go:172] (0x79c2070) (0x79c2150) Create stream
I0824 04:11:02.006896       7 log.go:172] (0x79c2070) (0x79c2150) Stream added, broadcasting: 1
I0824 04:11:02.012104       7 log.go:172] (0x79c2070) Reply frame received for 1
I0824 04:11:02.012335       7 log.go:172] (0x79c2070) (0x7e0e8c0) Create stream
I0824 04:11:02.012452       7 log.go:172] (0x79c2070) (0x7e0e8c0) Stream added, broadcasting: 3
I0824 04:11:02.014360       7 log.go:172] (0x79c2070) Reply frame received for 3
I0824 04:11:02.014507       7 log.go:172] (0x79c2070) (0x79c2230) Create stream
I0824 04:11:02.014608       7 log.go:172] (0x79c2070) (0x79c2230) Stream added, broadcasting: 5
I0824 04:11:02.016327       7 log.go:172] (0x79c2070) Reply frame received for 5
I0824 04:11:02.097434       7 log.go:172] (0x79c2070) Data frame received for 3
I0824 04:11:02.097623       7 log.go:172] (0x7e0e8c0) (3) Data frame handling
I0824 04:11:02.097725       7 log.go:172] (0x79c2070) Data frame received for 5
I0824 04:11:02.097857       7 log.go:172] (0x79c2230) (5) Data frame handling
I0824 04:11:02.097945       7 log.go:172] (0x7e0e8c0) (3) Data frame sent
I0824 04:11:02.098022       7 log.go:172] (0x79c2070) Data frame received for 3
I0824 04:11:02.098085       7 log.go:172] (0x7e0e8c0) (3) Data frame handling
I0824 04:11:02.099150       7 log.go:172] (0x79c2070) Data frame received for 1
I0824 04:11:02.099240       7 log.go:172] (0x79c2150) (1) Data frame handling
I0824 04:11:02.099368       7 log.go:172] (0x79c2150) (1) Data frame sent
I0824 04:11:02.099485       7 log.go:172] (0x79c2070) (0x79c2150) Stream removed, broadcasting: 1
I0824 04:11:02.099601       7 log.go:172] (0x79c2070) Go away received
I0824 04:11:02.099908       7 log.go:172] (0x79c2070) (0x79c2150) Stream removed, broadcasting: 1
I0824 04:11:02.099992       7 log.go:172] (0x79c2070) (0x7e0e8c0) Stream removed, broadcasting: 3
I0824 04:11:02.100068       7 log.go:172] (0x79c2070) (0x79c2230) Stream removed, broadcasting: 5
Aug 24 04:11:02.100: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:11:02.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8388" for this suite.
Aug 24 04:11:56.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:11:56.256: INFO: namespace e2e-kubelet-etc-hosts-8388 deletion completed in 54.146834868s

• [SLOW TEST:70.422 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
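The pass/fail logic behind the exec traces above can be sketched as follows. This assumes kubelet marks the hosts files it manages with a "# Kubernetes-managed hosts file." header comment (the marker string is an assumption about kubelet internals, hedged accordingly); the test execs `cat /etc/hosts` in each container and checks for that marker:

```python
# Sketch of the check this test performs. KUBELET_MARKER is the header
# comment kubelet is assumed to prepend to hosts files it manages.
KUBELET_MARKER = "# Kubernetes-managed hosts file."

def is_kubelet_managed(hosts_content: str) -> bool:
    """Return True if the /etc/hosts content carries the kubelet marker."""
    return KUBELET_MARKER in hosts_content

# Illustrative file contents (not captured from this run):
managed = "# Kubernetes-managed hosts file.\n127.0.0.1 localhost\n"
node_file = "127.0.0.1 localhost\n172.18.0.9 kind-worker\n"

print(is_kubelet_managed(managed))    # True  -> hostNetwork=false container
print(is_kubelet_managed(node_file))  # False -> hostNetwork=true or explicit mount
```

A hostNetwork=true pod sees the node's own /etc/hosts, which kubelet leaves untouched, so the marker check comes back negative there.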
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:11:56.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 24 04:12:02.625: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:12:02.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3316" for this suite.
Aug 24 04:12:08.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:12:08.896: INFO: namespace container-runtime-3316 deletion completed in 6.21431771s

• [SLOW TEST:12.638 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
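The assertion at 04:12:02 above ("Expected: &{} to match ... Termination Message") reflects the FallbackToLogsOnError semantics: logs substitute for the termination message only when the message file is empty AND the container failed, so a succeeding pod reports an empty message. A minimal sketch of that rule (function name and signature are illustrative, not a Kubernetes API):

```python
def termination_message(message_file: str, logs: str,
                        policy: str, exit_code: int) -> str:
    """Sketch of TerminationMessagePolicy semantics exercised by this spec:
    FallbackToLogsOnError falls back to container logs only on failure."""
    if message_file:
        return message_file
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return logs  # failed with no message file: surface the logs
    return ""        # succeeded: message stays empty, matching &{} above

print(termination_message("", "some logs", "FallbackToLogsOnError", 0))  # empty
print(termination_message("", "boom", "FallbackToLogsOnError", 1))       # boom
```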
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:12:08.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:12:09.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1211" for this suite.
Aug 24 04:12:17.481: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:12:17.643: INFO: namespace kubelet-test-1211 deletion completed in 8.335998259s

• [SLOW TEST:8.745 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:12:17.646: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Aug 24 04:12:19.632: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9032,SelfLink:/api/v1/namespaces/watch-9032/configmaps/e2e-watch-test-label-changed,UID:175ccb46-98e8-4f88-ab5a-6d1deaf29101,ResourceVersion:2282797,Generation:0,CreationTimestamp:2020-08-24 04:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 24 04:12:19.634: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9032,SelfLink:/api/v1/namespaces/watch-9032/configmaps/e2e-watch-test-label-changed,UID:175ccb46-98e8-4f88-ab5a-6d1deaf29101,ResourceVersion:2282800,Generation:0,CreationTimestamp:2020-08-24 04:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 24 04:12:19.634: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9032,SelfLink:/api/v1/namespaces/watch-9032/configmaps/e2e-watch-test-label-changed,UID:175ccb46-98e8-4f88-ab5a-6d1deaf29101,ResourceVersion:2282808,Generation:0,CreationTimestamp:2020-08-24 04:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Aug 24 04:12:30.017: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9032,SelfLink:/api/v1/namespaces/watch-9032/configmaps/e2e-watch-test-label-changed,UID:175ccb46-98e8-4f88-ab5a-6d1deaf29101,ResourceVersion:2282854,Generation:0,CreationTimestamp:2020-08-24 04:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 24 04:12:30.018: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9032,SelfLink:/api/v1/namespaces/watch-9032/configmaps/e2e-watch-test-label-changed,UID:175ccb46-98e8-4f88-ab5a-6d1deaf29101,ResourceVersion:2282855,Generation:0,CreationTimestamp:2020-08-24 04:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Aug 24 04:12:30.018: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9032,SelfLink:/api/v1/namespaces/watch-9032/configmaps/e2e-watch-test-label-changed,UID:175ccb46-98e8-4f88-ab5a-6d1deaf29101,ResourceVersion:2282856,Generation:0,CreationTimestamp:2020-08-24 04:12:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:12:30.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9032" for this suite.
Aug 24 04:12:36.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:12:37.183: INFO: namespace watch-9032 deletion completed in 7.156571486s

• [SLOW TEST:19.537 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
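Annotation: from the ADDED/MODIFIED/DELETED events above, the watched object is a plain ConfigMap selected by a label selector on `watch-this-configmap`; flipping that label away drops the object out of the watch (surfacing as DELETED), and restoring it re-adds it (surfacing as ADDED). A minimal manifest consistent with those events, reconstructed from the log rather than the test source, would be:

```yaml
# Hedged sketch of the watched object; name, namespace, and label are taken
# from the event dumps above, the rest is a plausible reconstruction.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  namespace: watch-9032
  labels:
    watch-this-configmap: label-changed-and-restored  # the selector key the watch filters on
data: {}  # the test later patches in a "mutation" key, visible in the MODIFIED events
```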
------------------------------
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:12:37.186: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:12:44.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-7150" for this suite.
Aug 24 04:12:50.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:12:50.398: INFO: namespace namespaces-7150 deletion completed in 6.137530511s
STEP: Destroying namespace "nsdeletetest-6110" for this suite.
Aug 24 04:12:50.401: INFO: Namespace nsdeletetest-6110 was already deleted
STEP: Destroying namespace "nsdeletetest-2909" for this suite.
Aug 24 04:12:56.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:12:57.364: INFO: namespace nsdeletetest-2909 deletion completed in 6.961996999s

• [SLOW TEST:20.178 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
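Annotation: the test above creates a Service inside a throwaway namespace, deletes the namespace, and verifies the Service is gone once the namespace is recreated. The log does not record the Service's name or spec, so the sketch below is illustrative only (the name `test-service` and the port are hypothetical):

```yaml
# Hedged sketch of the kind of Service the test creates in the doomed
# namespace; "test-service" and port 80 are assumptions, not from the log.
apiVersion: v1
kind: Service
metadata:
  name: test-service        # hypothetical
  namespace: nsdeletetest-6110  # one of the test namespaces destroyed above
spec:
  ports:
  - port: 80
    targetPort: 80
```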
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:12:57.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:12:57.797: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da" in namespace "projected-442" to be "success or failure"
Aug 24 04:12:57.877: INFO: Pod "downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da": Phase="Pending", Reason="", readiness=false. Elapsed: 79.661605ms
Aug 24 04:12:59.968: INFO: Pod "downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.169934254s
Aug 24 04:13:02.179: INFO: Pod "downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380874621s
Aug 24 04:13:04.188: INFO: Pod "downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.390708941s
STEP: Saw pod success
Aug 24 04:13:04.189: INFO: Pod "downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da" satisfied condition "success or failure"
Aug 24 04:13:04.193: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da container client-container: 
STEP: delete the pod
Aug 24 04:13:04.219: INFO: Waiting for pod downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da to disappear
Aug 24 04:13:04.224: INFO: Pod downwardapi-volume-cb5644d8-0142-4e39-b25d-464f94d901da no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:13:04.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-442" for this suite.
Aug 24 04:13:10.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:13:10.447: INFO: namespace projected-442 deletion completed in 6.214633658s

• [SLOW TEST:13.081 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
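Annotation: this spec exercises the downward API via a projected volume, exposing the container's own memory limit as a file. A sketch consistent with the log (the container name `client-container` appears above; the image, mount path, and limit value are assumptions, since the log does not record them):

```yaml
# Hedged sketch of the test pod; only the container name is taken from the
# log. Image, paths, and the 64Mi limit are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example  # the real pod used a generated UID name
spec:
  containers:
  - name: client-container
    image: busybox            # assumed; the e2e suite uses its own test image
    command: ["cat", "/etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi          # illustrative; this is the value surfaced in the file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory   # the field under test
```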
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:13:10.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Aug 24 04:13:10.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7379'
Aug 24 04:13:12.251: INFO: stderr: ""
Aug 24 04:13:12.251: INFO: stdout: "pod/pause created\n"
Aug 24 04:13:12.252: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Aug 24 04:13:12.253: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-7379" to be "running and ready"
Aug 24 04:13:12.285: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 32.060215ms
Aug 24 04:13:14.341: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088051618s
Aug 24 04:13:16.388: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.134457501s
Aug 24 04:13:16.388: INFO: Pod "pause" satisfied condition "running and ready"
Aug 24 04:13:16.388: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Aug 24 04:13:16.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-7379'
Aug 24 04:13:17.506: INFO: stderr: ""
Aug 24 04:13:17.506: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Aug 24 04:13:17.506: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7379'
Aug 24 04:13:18.646: INFO: stderr: ""
Aug 24 04:13:18.646: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          6s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Aug 24 04:13:18.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-7379'
Aug 24 04:13:19.774: INFO: stderr: ""
Aug 24 04:13:19.774: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Aug 24 04:13:19.774: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-7379'
Aug 24 04:13:20.919: INFO: stderr: ""
Aug 24 04:13:20.919: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Aug 24 04:13:20.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7379'
Aug 24 04:13:22.670: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 04:13:22.670: INFO: stdout: "pod \"pause\" force deleted\n"
Aug 24 04:13:22.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-7379'
Aug 24 04:13:23.903: INFO: stderr: "No resources found.\n"
Aug 24 04:13:23.903: INFO: stdout: ""
Aug 24 04:13:23.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-7379 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 24 04:13:25.055: INFO: stderr: ""
Aug 24 04:13:25.055: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:13:25.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7379" for this suite.
Aug 24 04:13:31.147: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:13:31.295: INFO: namespace kubectl-7379 deletion completed in 6.230815733s

• [SLOW TEST:20.843 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:13:31.296: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-8755958b-7a67-459b-a6a2-7f74468f3918
STEP: Creating a pod to test consume secrets
Aug 24 04:13:31.580: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54" in namespace "projected-1357" to be "success or failure"
Aug 24 04:13:31.609: INFO: Pod "pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54": Phase="Pending", Reason="", readiness=false. Elapsed: 29.192972ms
Aug 24 04:13:33.618: INFO: Pod "pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037373875s
Aug 24 04:13:35.669: INFO: Pod "pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54": Phase="Running", Reason="", readiness=true. Elapsed: 4.08855226s
Aug 24 04:13:37.718: INFO: Pod "pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.137601802s
STEP: Saw pod success
Aug 24 04:13:37.718: INFO: Pod "pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54" satisfied condition "success or failure"
Aug 24 04:13:37.765: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54 container projected-secret-volume-test: 
STEP: delete the pod
Aug 24 04:13:37.855: INFO: Waiting for pod pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54 to disappear
Aug 24 04:13:37.944: INFO: Pod pod-projected-secrets-ddd26225-b9ef-4aa5-95f9-8421772bda54 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:13:37.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1357" for this suite.
Aug 24 04:13:46.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:13:46.361: INFO: namespace projected-1357 deletion completed in 8.321284208s

• [SLOW TEST:15.066 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
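Annotation: this spec combines three things the test name calls out: a projected secret volume, a `defaultMode` on the volume, and an `fsGroup` so a non-root user can read the files. A sketch consistent with the log (the secret name and container name appear above; the UIDs, mode, and paths are illustrative assumptions):

```yaml
# Hedged sketch of the test pod; secret name and container name are from the
# log, securityContext values and mode are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  securityContext:
    runAsUser: 1000   # illustrative non-root UID
    fsGroup: 1000     # group ownership applied to the volume's files
  containers:
  - name: projected-secret-volume-test
    image: busybox    # assumed
    command: ["cat", "/etc/projected-secret-volume/data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400   # illustrative; the permission bits under test
      sources:
      - secret:
          name: projected-secret-test-8755958b-7a67-459b-a6a2-7f74468f3918
```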
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:13:46.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0824 04:14:01.160615       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 24 04:14:01.161: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:14:01.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3759" for this suite.
Aug 24 04:14:11.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:14:11.381: INFO: namespace gc-3759 deletion completed in 10.211687572s

• [SLOW TEST:25.018 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
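Annotation: the garbage-collector test above patches half of the pods owned by `simpletest-rc-to-be-deleted` so they also list `simpletest-rc-to-stay` as an owner. With two owners, deleting only the first RC must not cascade to those pods. A sketch of the resulting pod metadata, reconstructed from the step names in the log (UIDs are placeholders):

```yaml
# Hedged sketch of a dual-owned pod's metadata after the patch; owner names
# are from the log, UIDs are placeholders.
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: <uid-1>   # placeholder; deleted with foreground/orphan semantics under test
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: <uid-2>   # placeholder; this surviving owner keeps the pod alive
```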
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:14:11.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-2e3cb438-4bd9-4114-84b4-afd74269510a
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-2e3cb438-4bd9-4114-84b4-afd74269510a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:15:33.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-944" for this suite.
Aug 24 04:15:55.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:15:55.170: INFO: namespace configmap-944 deletion completed in 22.158009681s

• [SLOW TEST:103.787 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
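Annotation: this test mounts a ConfigMap as a volume, updates the ConfigMap through the API, and waits for the kubelet to sync the new data into the mounted files; the long runtime above (~80 s between pod creation and teardown) reflects that the propagation is periodic rather than immediate. A sketch of the relevant pod fragment (the ConfigMap name is from the log; the container and mount details are assumptions):

```yaml
# Hedged fragment of the test pod spec; only the configMap name is from the
# log. Updated ConfigMap data eventually appears in the mounted files.
spec:
  containers:
  - name: configmap-volume-test   # hypothetical name
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd-2e3cb438-4bd9-4114-84b4-afd74269510a
```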
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:15:55.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:15:55.272: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8974903-61bf-4485-9de0-a459aaae4672" in namespace "projected-6687" to be "success or failure"
Aug 24 04:15:55.334: INFO: Pod "downwardapi-volume-a8974903-61bf-4485-9de0-a459aaae4672": Phase="Pending", Reason="", readiness=false. Elapsed: 61.812436ms
Aug 24 04:15:57.342: INFO: Pod "downwardapi-volume-a8974903-61bf-4485-9de0-a459aaae4672": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06935447s
Aug 24 04:15:59.433: INFO: Pod "downwardapi-volume-a8974903-61bf-4485-9de0-a459aaae4672": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16052991s
STEP: Saw pod success
Aug 24 04:15:59.433: INFO: Pod "downwardapi-volume-a8974903-61bf-4485-9de0-a459aaae4672" satisfied condition "success or failure"
Aug 24 04:15:59.438: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a8974903-61bf-4485-9de0-a459aaae4672 container client-container: 
STEP: delete the pod
Aug 24 04:15:59.505: INFO: Waiting for pod downwardapi-volume-a8974903-61bf-4485-9de0-a459aaae4672 to disappear
Aug 24 04:15:59.698: INFO: Pod downwardapi-volume-a8974903-61bf-4485-9de0-a459aaae4672 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:15:59.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6687" for this suite.
Aug 24 04:16:05.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:16:06.105: INFO: namespace projected-6687 deletion completed in 6.205124045s

• [SLOW TEST:10.935 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
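Annotation: unlike the earlier `defaultMode` test, this one sets a `mode` on an individual projected item, overriding the volume-wide default for that file only. A sketch of the volume fragment under test (the container name `client-container` is from the log; the item path, field, and mode value are illustrative):

```yaml
# Hedged fragment of the projected downward API volume; the per-item "mode"
# is the field under test. Path, fieldPath, and 0400 are illustrative.
volumes:
- name: podinfo
  projected:
    sources:
    - downwardAPI:
        items:
        - path: podname
          fieldRef:
            fieldPath: metadata.name
          mode: 0400   # per-item permission override
```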
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:16:06.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-228807a2-e639-4956-ac9d-1e7d9dce4dd6
STEP: Creating a pod to test consume secrets
Aug 24 04:16:06.309: INFO: Waiting up to 5m0s for pod "pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722" in namespace "secrets-9163" to be "success or failure"
Aug 24 04:16:06.375: INFO: Pod "pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722": Phase="Pending", Reason="", readiness=false. Elapsed: 65.692463ms
Aug 24 04:16:08.383: INFO: Pod "pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073131003s
Aug 24 04:16:10.389: INFO: Pod "pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722": Phase="Running", Reason="", readiness=true. Elapsed: 4.079415443s
Aug 24 04:16:12.395: INFO: Pod "pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08523106s
STEP: Saw pod success
Aug 24 04:16:12.395: INFO: Pod "pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722" satisfied condition "success or failure"
Aug 24 04:16:12.399: INFO: Trying to get logs from node iruya-worker pod pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722 container secret-volume-test: 
STEP: delete the pod
Aug 24 04:16:12.423: INFO: Waiting for pod pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722 to disappear
Aug 24 04:16:12.427: INFO: Pod pod-secrets-e5ae32ef-84ea-4b24-8362-b9cdf5650722 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:16:12.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9163" for this suite.
Aug 24 04:16:18.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:16:18.592: INFO: namespace secrets-9163 deletion completed in 6.15562893s

• [SLOW TEST:12.485 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:16:18.596: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-22894d86-0df8-40fe-857e-c0d223f6f5df
STEP: Creating a pod to test consume configMaps
Aug 24 04:16:18.709: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b7b1c4be-b340-41af-bc7e-52686caa5c90" in namespace "projected-5624" to be "success or failure"
Aug 24 04:16:18.723: INFO: Pod "pod-projected-configmaps-b7b1c4be-b340-41af-bc7e-52686caa5c90": Phase="Pending", Reason="", readiness=false. Elapsed: 12.882989ms
Aug 24 04:16:20.730: INFO: Pod "pod-projected-configmaps-b7b1c4be-b340-41af-bc7e-52686caa5c90": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020151794s
Aug 24 04:16:22.737: INFO: Pod "pod-projected-configmaps-b7b1c4be-b340-41af-bc7e-52686caa5c90": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027788757s
STEP: Saw pod success
Aug 24 04:16:22.738: INFO: Pod "pod-projected-configmaps-b7b1c4be-b340-41af-bc7e-52686caa5c90" satisfied condition "success or failure"
Aug 24 04:16:22.743: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-b7b1c4be-b340-41af-bc7e-52686caa5c90 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 24 04:16:22.806: INFO: Waiting for pod pod-projected-configmaps-b7b1c4be-b340-41af-bc7e-52686caa5c90 to disappear
Aug 24 04:16:22.844: INFO: Pod pod-projected-configmaps-b7b1c4be-b340-41af-bc7e-52686caa5c90 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:16:22.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5624" for this suite.
Aug 24 04:16:28.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:16:29.017: INFO: namespace projected-5624 deletion completed in 6.161262226s

• [SLOW TEST:10.422 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:16:29.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:16:29.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:16:33.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-4274" for this suite.
Aug 24 04:17:15.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:17:15.467: INFO: namespace pods-4274 deletion completed in 42.153372042s

• [SLOW TEST:46.444 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
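(Editor's note) The websocket test above exercises the pod's `exec` subresource over a streaming connection. The same endpoint can be reached interactively with `kubectl exec`, which negotiates the streaming protocol for you — a hedged sketch, with the pod name purely illustrative (the actual test pod name is generated at runtime):

```shell
# Illustrative only: run a command in an existing pod via the exec subresource.
# "pod-exec-websocket-demo" is an assumed name, not the test's generated pod.
kubectl --kubeconfig=/root/.kube/config exec pod-exec-websocket-demo \
  --namespace=pods-4274 -- /bin/sh -c 'echo remote execution over the stream'
```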
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:17:15.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 24 04:17:15.616: INFO: Waiting up to 5m0s for pod "pod-c60c65df-b367-4881-96bf-2bb289155993" in namespace "emptydir-5314" to be "success or failure"
Aug 24 04:17:15.634: INFO: Pod "pod-c60c65df-b367-4881-96bf-2bb289155993": Phase="Pending", Reason="", readiness=false. Elapsed: 17.859485ms
Aug 24 04:17:17.641: INFO: Pod "pod-c60c65df-b367-4881-96bf-2bb289155993": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025328857s
Aug 24 04:17:19.654: INFO: Pod "pod-c60c65df-b367-4881-96bf-2bb289155993": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037634479s
STEP: Saw pod success
Aug 24 04:17:19.654: INFO: Pod "pod-c60c65df-b367-4881-96bf-2bb289155993" satisfied condition "success or failure"
Aug 24 04:17:19.658: INFO: Trying to get logs from node iruya-worker pod pod-c60c65df-b367-4881-96bf-2bb289155993 container test-container: 
STEP: delete the pod
Aug 24 04:17:19.722: INFO: Waiting for pod pod-c60c65df-b367-4881-96bf-2bb289155993 to disappear
Aug 24 04:17:20.043: INFO: Pod pod-c60c65df-b367-4881-96bf-2bb289155993 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:17:20.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5314" for this suite.
Aug 24 04:17:26.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:17:27.173: INFO: namespace emptydir-5314 deletion completed in 7.073157939s

• [SLOW TEST:11.704 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
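(Editor's note) The `(non-root,0666,tmpfs)` case above mounts an `emptyDir` volume backed by memory (`medium: Memory`, i.e. tmpfs) into a container running as a non-root user. A minimal sketch of a comparable pod spec — names and UID are assumptions, not taken from the test's generated manifest:

```shell
# Illustrative pod spec approximating the tmpfs emptyDir case (names assumed).
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  securityContext:
    runAsUser: 1000            # non-root user, as in the (non-root,...) variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume && sleep 3600"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
EOF
```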
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:17:27.175: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 24 04:17:32.393: INFO: Successfully updated pod "pod-update-218f4769-389d-4981-b139-76cf3156c0ad"
STEP: verifying the updated pod is in kubernetes
Aug 24 04:17:32.405: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:17:32.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3922" for this suite.
Aug 24 04:17:54.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:17:54.565: INFO: namespace pods-3922 deletion completed in 22.1509611s

• [SLOW TEST:27.390 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:17:54.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 24 04:17:54.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4207'
Aug 24 04:17:58.811: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 24 04:17:58.811: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Aug 24 04:17:58.857: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-4207'
Aug 24 04:17:59.991: INFO: stderr: ""
Aug 24 04:17:59.991: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:17:59.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4207" for this suite.
Aug 24 04:18:22.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:18:22.215: INFO: namespace kubectl-4207 deletion completed in 22.188919451s

• [SLOW TEST:27.649 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
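(Editor's note) The stderr captured above warns that `kubectl run --generator=job/v1` is deprecated. Per that warning's own suggestion, the non-deprecated way to create the same Job is `kubectl create job` — a sketch of the equivalent invocation, assuming the same image and namespace as the test:

```shell
# Non-deprecated equivalent of the deprecated "kubectl run --generator=job/v1"
# invocation logged above (same image and namespace assumed).
kubectl --kubeconfig=/root/.kube/config create job e2e-test-nginx-job \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4207
```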
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:18:22.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 24 04:18:22.315: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:18:31.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4327" for this suite.
Aug 24 04:18:53.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:18:53.298: INFO: namespace init-container-4327 deletion completed in 22.173124806s

• [SLOW TEST:31.080 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:18:53.299: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 24 04:18:53.400: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 24 04:18:53.429: INFO: Waiting for terminating namespaces to be deleted...
Aug 24 04:18:53.436: INFO: 
Logging pods the kubelet thinks is on node iruya-worker before test
Aug 24 04:18:53.459: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 24 04:18:53.460: INFO: 	Container app ready: true, restart count 0
Aug 24 04:18:53.460: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 04:18:53.460: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 24 04:18:53.460: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded)
Aug 24 04:18:53.460: INFO: 	Container app ready: true, restart count 0
Aug 24 04:18:53.460: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 04:18:53.461: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 24 04:18:53.461: INFO: 
Logging pods the kubelet thinks is on node iruya-worker2 before test
Aug 24 04:18:53.504: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 04:18:53.504: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 24 04:18:53.504: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded)
Aug 24 04:18:53.504: INFO: 	Container app ready: true, restart count 0
Aug 24 04:18:53.504: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 04:18:53.505: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 24 04:18:53.505: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 24 04:18:53.505: INFO: 	Container app ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Aug 24 04:18:53.648: INFO: Pod daemon-set-2gkvj requesting resource cpu=0m on Node iruya-worker
Aug 24 04:18:53.648: INFO: Pod daemon-set-hlzh5 requesting resource cpu=0m on Node iruya-worker2
Aug 24 04:18:53.648: INFO: Pod daemon-set-nk8hf requesting resource cpu=0m on Node iruya-worker2
Aug 24 04:18:53.648: INFO: Pod daemon-set-qwbvn requesting resource cpu=0m on Node iruya-worker
Aug 24 04:18:53.648: INFO: Pod kindnet-nkf5n requesting resource cpu=100m on Node iruya-worker
Aug 24 04:18:53.648: INFO: Pod kindnet-xsdzz requesting resource cpu=100m on Node iruya-worker2
Aug 24 04:18:53.648: INFO: Pod kube-proxy-5zw8s requesting resource cpu=0m on Node iruya-worker
Aug 24 04:18:53.648: INFO: Pod kube-proxy-b98qt requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbcd431e-7750-4857-93f3-d206c276bd7b.162e194c67234a26], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8283/filler-pod-bbcd431e-7750-4857-93f3-d206c276bd7b to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbcd431e-7750-4857-93f3-d206c276bd7b.162e194d1d96a130], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbcd431e-7750-4857-93f3-d206c276bd7b.162e194d59b01e17], Reason = [Created], Message = [Created container filler-pod-bbcd431e-7750-4857-93f3-d206c276bd7b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-bbcd431e-7750-4857-93f3-d206c276bd7b.162e194d6a62d983], Reason = [Started], Message = [Started container filler-pod-bbcd431e-7750-4857-93f3-d206c276bd7b]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ca7fee2b-f7c9-4370-935e-b6a2d72d6bf9.162e194c6636c3d6], Reason = [Scheduled], Message = [Successfully assigned sched-pred-8283/filler-pod-ca7fee2b-f7c9-4370-935e-b6a2d72d6bf9 to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ca7fee2b-f7c9-4370-935e-b6a2d72d6bf9.162e194cba090ec3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ca7fee2b-f7c9-4370-935e-b6a2d72d6bf9.162e194d18ec0d12], Reason = [Created], Message = [Created container filler-pod-ca7fee2b-f7c9-4370-935e-b6a2d72d6bf9]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-ca7fee2b-f7c9-4370-935e-b6a2d72d6bf9.162e194d2ec102cc], Reason = [Started], Message = [Started container filler-pod-ca7fee2b-f7c9-4370-935e-b6a2d72d6bf9]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.162e194dd16bc602], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:19:00.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8283" for this suite.
Aug 24 04:19:06.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:19:07.363: INFO: namespace sched-pred-8283 deletion completed in 6.391922802s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:14.065 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:19:07.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-fe8c821b-44ba-40e0-a60d-6955354a42d8
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:19:07.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5619" for this suite.
Aug 24 04:19:13.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:19:13.670: INFO: namespace configmap-5619 deletion completed in 6.143824856s

• [SLOW TEST:6.300 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:19:13.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-7144/configmap-test-2ebd4329-7671-4e8e-a726-8ecaab13746b
STEP: Creating a pod to test consume configMaps
Aug 24 04:19:14.039: INFO: Waiting up to 5m0s for pod "pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c" in namespace "configmap-7144" to be "success or failure"
Aug 24 04:19:14.092: INFO: Pod "pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c": Phase="Pending", Reason="", readiness=false. Elapsed: 52.30312ms
Aug 24 04:19:16.100: INFO: Pod "pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060194804s
Aug 24 04:19:18.107: INFO: Pod "pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c": Phase="Running", Reason="", readiness=true. Elapsed: 4.067559533s
Aug 24 04:19:20.114: INFO: Pod "pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.074811779s
STEP: Saw pod success
Aug 24 04:19:20.115: INFO: Pod "pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c" satisfied condition "success or failure"
Aug 24 04:19:20.120: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c container env-test: 
STEP: delete the pod
Aug 24 04:19:20.174: INFO: Waiting for pod pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c to disappear
Aug 24 04:19:20.231: INFO: Pod pod-configmaps-8d876d22-d8dc-4e90-b0e2-12fa1faff80c no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:19:20.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7144" for this suite.
Aug 24 04:19:26.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:19:26.453: INFO: namespace configmap-7144 deletion completed in 6.210571394s

• [SLOW TEST:12.783 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:19:26.457: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Aug 24 04:19:26.505: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9872'
Aug 24 04:19:28.061: INFO: stderr: ""
Aug 24 04:19:28.061: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Aug 24 04:19:29.071: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:19:29.072: INFO: Found 0 / 1
Aug 24 04:19:30.070: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:19:30.070: INFO: Found 0 / 1
Aug 24 04:19:31.069: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:19:31.069: INFO: Found 0 / 1
Aug 24 04:19:32.069: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:19:32.070: INFO: Found 1 / 1
Aug 24 04:19:32.070: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 24 04:19:32.075: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:19:32.076: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Aug 24 04:19:32.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mm8zb redis-master --namespace=kubectl-9872'
Aug 24 04:19:33.243: INFO: stderr: ""
Aug 24 04:19:33.243: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Aug 04:19:31.025 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Aug 04:19:31.025 # Server started, Redis version 3.2.12\n1:M 24 Aug 04:19:31.025 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Aug 04:19:31.025 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Aug 24 04:19:33.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mm8zb redis-master --namespace=kubectl-9872 --tail=1'
Aug 24 04:19:34.419: INFO: stderr: ""
Aug 24 04:19:34.420: INFO: stdout: "1:M 24 Aug 04:19:31.025 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Aug 24 04:19:34.421: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mm8zb redis-master --namespace=kubectl-9872 --limit-bytes=1'
Aug 24 04:19:35.574: INFO: stderr: ""
Aug 24 04:19:35.574: INFO: stdout: " "
STEP: exposing timestamps
Aug 24 04:19:35.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mm8zb redis-master --namespace=kubectl-9872 --tail=1 --timestamps'
Aug 24 04:19:36.788: INFO: stderr: ""
Aug 24 04:19:36.788: INFO: stdout: "2020-08-24T04:19:31.026020385Z 1:M 24 Aug 04:19:31.025 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Aug 24 04:19:39.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mm8zb redis-master --namespace=kubectl-9872 --since=1s'
Aug 24 04:19:40.470: INFO: stderr: ""
Aug 24 04:19:40.470: INFO: stdout: ""
Aug 24 04:19:40.470: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-mm8zb redis-master --namespace=kubectl-9872 --since=24h'
Aug 24 04:19:41.635: INFO: stderr: ""
Aug 24 04:19:41.635: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Aug 04:19:31.025 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Aug 04:19:31.025 # Server started, Redis version 3.2.12\n1:M 24 Aug 04:19:31.025 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Aug 04:19:31.025 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Aug 24 04:19:41.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9872'
Aug 24 04:19:42.716: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 04:19:42.717: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Aug 24 04:19:42.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-9872'
Aug 24 04:19:43.874: INFO: stderr: "No resources found.\n"
Aug 24 04:19:43.874: INFO: stdout: ""
Aug 24 04:19:43.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-9872 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 24 04:19:44.982: INFO: stderr: ""
Aug 24 04:19:44.982: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:19:44.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9872" for this suite.
Aug 24 04:20:07.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:20:07.160: INFO: namespace kubectl-9872 deletion completed in 22.167930859s

• [SLOW TEST:40.703 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:20:07.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:20:34.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9691" for this suite.
Aug 24 04:20:40.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:20:40.695: INFO: namespace namespaces-9691 deletion completed in 6.164410452s
STEP: Destroying namespace "nsdeletetest-9270" for this suite.
Aug 24 04:20:40.698: INFO: Namespace nsdeletetest-9270 was already deleted
STEP: Destroying namespace "nsdeletetest-6397" for this suite.
Aug 24 04:20:46.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:20:46.934: INFO: namespace nsdeletetest-6397 deletion completed in 6.235481112s

• [SLOW TEST:39.773 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:20:46.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 24 04:20:46.999: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 24 04:20:47.027: INFO: Waiting for terminating namespaces to be deleted...
Aug 24 04:20:47.059: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 24 04:20:47.074: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 24 04:20:47.074: INFO: 	Container app ready: true, restart count 0
Aug 24 04:20:47.074: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 04:20:47.074: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 24 04:20:47.074: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded)
Aug 24 04:20:47.074: INFO: 	Container app ready: true, restart count 0
Aug 24 04:20:47.074: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 04:20:47.074: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 24 04:20:47.075: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 24 04:20:47.089: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 04:20:47.089: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 24 04:20:47.089: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded)
Aug 24 04:20:47.089: INFO: 	Container app ready: true, restart count 0
Aug 24 04:20:47.089: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 04:20:47.089: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 24 04:20:47.090: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 24 04:20:47.090: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-8433e5ce-4624-4022-afe4-42538ff92e93 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-8433e5ce-4624-4022-afe4-42538ff92e93 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-8433e5ce-4624-4022-afe4-42538ff92e93
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:20:57.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5640" for this suite.
Aug 24 04:21:15.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:21:15.458: INFO: namespace sched-pred-5640 deletion completed in 18.179808129s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:28.523 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:21:15.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:21:47.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7233" for this suite.
Aug 24 04:21:53.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:21:53.292: INFO: namespace container-runtime-7233 deletion completed in 6.172437723s

• [SLOW TEST:37.832 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:21:53.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:21:57.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6955" for this suite.
Aug 24 04:22:03.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:22:03.662: INFO: namespace kubelet-test-6955 deletion completed in 6.244833301s

• [SLOW TEST:10.368 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:22:03.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-jf9b
STEP: Creating a pod to test atomic-volume-subpath
Aug 24 04:22:03.886: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jf9b" in namespace "subpath-9370" to be "success or failure"
Aug 24 04:22:03.922: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Pending", Reason="", readiness=false. Elapsed: 35.894612ms
Aug 24 04:22:05.928: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041869436s
Aug 24 04:22:07.977: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 4.090727578s
Aug 24 04:22:09.984: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 6.098032616s
Aug 24 04:22:11.993: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 8.107109637s
Aug 24 04:22:14.001: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 10.114992069s
Aug 24 04:22:16.008: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 12.121744879s
Aug 24 04:22:18.014: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 14.128253284s
Aug 24 04:22:20.022: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 16.135934481s
Aug 24 04:22:22.030: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 18.143967329s
Aug 24 04:22:24.038: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 20.152465932s
Aug 24 04:22:26.046: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Running", Reason="", readiness=true. Elapsed: 22.160297741s
Aug 24 04:22:28.053: INFO: Pod "pod-subpath-test-configmap-jf9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.16701223s
STEP: Saw pod success
Aug 24 04:22:28.053: INFO: Pod "pod-subpath-test-configmap-jf9b" satisfied condition "success or failure"
Aug 24 04:22:28.060: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-jf9b container test-container-subpath-configmap-jf9b: 
STEP: delete the pod
Aug 24 04:22:28.080: INFO: Waiting for pod pod-subpath-test-configmap-jf9b to disappear
Aug 24 04:22:28.084: INFO: Pod pod-subpath-test-configmap-jf9b no longer exists
STEP: Deleting pod pod-subpath-test-configmap-jf9b
Aug 24 04:22:28.085: INFO: Deleting pod "pod-subpath-test-configmap-jf9b" in namespace "subpath-9370"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:22:28.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9370" for this suite.
Aug 24 04:22:34.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:22:34.254: INFO: namespace subpath-9370 deletion completed in 6.159870542s

• [SLOW TEST:30.591 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:22:34.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 24 04:22:38.924: INFO: Successfully updated pod "pod-update-activedeadlineseconds-f89fbd16-a890-4f0c-bffd-4e9d13dc0db9"
Aug 24 04:22:38.925: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-f89fbd16-a890-4f0c-bffd-4e9d13dc0db9" in namespace "pods-5773" to be "terminated due to deadline exceeded"
Aug 24 04:22:38.935: INFO: Pod "pod-update-activedeadlineseconds-f89fbd16-a890-4f0c-bffd-4e9d13dc0db9": Phase="Running", Reason="", readiness=true. Elapsed: 10.528012ms
Aug 24 04:22:40.943: INFO: Pod "pod-update-activedeadlineseconds-f89fbd16-a890-4f0c-bffd-4e9d13dc0db9": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.017636587s
Aug 24 04:22:40.943: INFO: Pod "pod-update-activedeadlineseconds-f89fbd16-a890-4f0c-bffd-4e9d13dc0db9" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:22:40.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5773" for this suite.
Aug 24 04:22:47.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:22:47.157: INFO: namespace pods-5773 deletion completed in 6.201987072s

• [SLOW TEST:12.900 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:22:47.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-cc8f3af6-2cbe-4453-9c25-c19122843063
STEP: Creating a pod to test consume secrets
Aug 24 04:22:47.279: INFO: Waiting up to 5m0s for pod "pod-secrets-49c6c128-2d19-422f-9e59-41a690e35efb" in namespace "secrets-898" to be "success or failure"
Aug 24 04:22:47.296: INFO: Pod "pod-secrets-49c6c128-2d19-422f-9e59-41a690e35efb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.434247ms
Aug 24 04:22:49.305: INFO: Pod "pod-secrets-49c6c128-2d19-422f-9e59-41a690e35efb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025304075s
Aug 24 04:22:51.311: INFO: Pod "pod-secrets-49c6c128-2d19-422f-9e59-41a690e35efb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031798433s
STEP: Saw pod success
Aug 24 04:22:51.311: INFO: Pod "pod-secrets-49c6c128-2d19-422f-9e59-41a690e35efb" satisfied condition "success or failure"
Aug 24 04:22:51.315: INFO: Trying to get logs from node iruya-worker pod pod-secrets-49c6c128-2d19-422f-9e59-41a690e35efb container secret-volume-test: 
STEP: delete the pod
Aug 24 04:22:51.360: INFO: Waiting for pod pod-secrets-49c6c128-2d19-422f-9e59-41a690e35efb to disappear
Aug 24 04:22:51.594: INFO: Pod pod-secrets-49c6c128-2d19-422f-9e59-41a690e35efb no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:22:51.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-898" for this suite.
Aug 24 04:22:57.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:22:57.773: INFO: namespace secrets-898 deletion completed in 6.165578473s

• [SLOW TEST:10.608 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
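The Secrets spec above mounts a secret into a pod volume and reads it back (the log's container is named `secret-volume-test`). A minimal sketch of the same pattern, with illustrative names and an assumed payload — the actual e2e test uses generated names and a test image:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test            # illustrative; the log uses a generated suffix
data:
  data-1: dmFsdWUtMQ==         # base64 for "value-1" (assumed payload)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never         # the test waits for "success or failure", i.e. phase Succeeded
  containers:
  - name: secret-volume-test   # container name matches the log
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
```

The pod exits 0 after printing the secret value, which is why the log shows `Phase="Succeeded"` and then fetches the container logs.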
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:22:57.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 24 04:22:57.867: INFO: PodSpec: initContainers in spec.initContainers
Aug 24 04:23:47.796: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a4c51943-deb6-478b-b12f-58f6a5f6e14f", GenerateName:"", Namespace:"init-container-9964", SelfLink:"/api/v1/namespaces/init-container-9964/pods/pod-init-a4c51943-deb6-478b-b12f-58f6a5f6e14f", UID:"d08197d5-83f2-465f-89d4-f911dcb7d134", ResourceVersion:"2285486", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733839777, loc:(*time.Location)(0x67985e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"866217028"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-d8p5g", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0x8d6e2c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d8p5g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d8p5g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-d8p5g", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0x93963d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0x90bf200), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x9396460)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0x9396480)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0x9396488), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0x939648c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733839778, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733839778, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733839778, loc:(*time.Location)(0x67985e0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733839777, loc:(*time.Location)(0x67985e0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.2.165", StartTime:(*v1.Time)(0x8d6e4e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0x8d6e520), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0x9074fa0)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://dbe319777d93f01d12cf0db14053ad312d9c453f2e39758436e1308e7b4fb943"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x7c72f60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0x7c72f50), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:23:47.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9964" for this suite.
Aug 24 04:24:09.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:24:10.097: INFO: namespace init-container-9964 deletion completed in 22.271731742s

• [SLOW TEST:72.323 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
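The InitContainer spec above is fully visible in the pod dump the log prints: two init containers (`init1` running `/bin/false`, `init2` running `/bin/true`) in front of an app container `run1`, on a `RestartAlways` pod. Reconstructed as a manifest directly from that dump:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-demo          # log name has a generated suffix
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]    # always fails, so init2 and run1 never start
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                 # equal requests/limits => QOSClass "Guaranteed", as in the dump
      requests: {cpu: 100m, memory: "52428800"}
      limits:   {cpu: 100m, memory: "52428800"}
```

Because `init1` keeps failing, the status in the dump shows `init1` with `RestartCount:3`, `init2` still Waiting, and `run1` never started — exactly the behavior the spec asserts.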
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:24:10.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:24:10.538: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8" in namespace "projected-8706" to be "success or failure"
Aug 24 04:24:10.811: INFO: Pod "downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 273.183268ms
Aug 24 04:24:12.818: INFO: Pod "downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.280338825s
Aug 24 04:24:14.824: INFO: Pod "downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.286332908s
Aug 24 04:24:16.831: INFO: Pod "downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8": Phase="Running", Reason="", readiness=true. Elapsed: 6.292788157s
Aug 24 04:24:18.837: INFO: Pod "downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.299512898s
STEP: Saw pod success
Aug 24 04:24:18.838: INFO: Pod "downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8" satisfied condition "success or failure"
Aug 24 04:24:18.842: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8 container client-container: 
STEP: delete the pod
Aug 24 04:24:18.881: INFO: Waiting for pod downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8 to disappear
Aug 24 04:24:18.889: INFO: Pod downwardapi-volume-0918ef4a-ad14-408d-b385-f019e71c2fd8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:24:18.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8706" for this suite.
Aug 24 04:24:24.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:24:25.087: INFO: namespace projected-8706 deletion completed in 6.164207478s

• [SLOW TEST:14.988 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
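The Projected downwardAPI spec above exposes a container's CPU request through a projected volume and reads it back (the log's container is `client-container`). A minimal sketch under assumed names and an illustrative request value:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container     # container name matches the log
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m              # illustrative value
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
```

The `resourceFieldRef` also accepts a `divisor`; without one, CPU quantities are rounded up to whole cores when written into the file.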
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:24:25.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Aug 24 04:24:32.365: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:24:32.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1259" for this suite.
Aug 24 04:24:54.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:24:54.636: INFO: namespace replicaset-1259 deletion completed in 22.216255089s

• [SLOW TEST:29.547 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
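The ReplicaSet spec above first creates a bare pod labeled `name: pod-adoption-release`, then a ReplicaSet whose selector matches it: the controller adopts the orphan rather than creating a new pod. When the pod's label is changed so it no longer matches, the controller releases it and spins up a replacement. A sketch of such a ReplicaSet (image is borrowed from elsewhere in this log and is illustrative here):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # matches the pre-existing pod's label => adoption
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
```

Adoption and release are visible in the pod's `ownerReferences`: set on adoption, cleared on release.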
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:24:54.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1719
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 24 04:24:54.827: INFO: Found 0 stateful pods, waiting for 3
Aug 24 04:25:04.837: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 04:25:04.837: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 04:25:04.837: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Aug 24 04:25:14.837: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 04:25:14.837: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 04:25:14.837: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 24 04:25:14.877: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Aug 24 04:25:24.922: INFO: Updating stateful set ss2
Aug 24 04:25:25.490: INFO: Waiting for Pod statefulset-1719/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Aug 24 04:25:35.763: INFO: Found 2 stateful pods, waiting for 3
Aug 24 04:25:45.774: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 04:25:45.774: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 04:25:45.774: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Aug 24 04:25:45.810: INFO: Updating stateful set ss2
Aug 24 04:25:45.868: INFO: Waiting for Pod statefulset-1719/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 24 04:25:55.910: INFO: Updating stateful set ss2
Aug 24 04:25:56.065: INFO: Waiting for StatefulSet statefulset-1719/ss2 to complete update
Aug 24 04:25:56.066: INFO: Waiting for Pod statefulset-1719/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Aug 24 04:26:06.082: INFO: Waiting for StatefulSet statefulset-1719/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 24 04:26:16.082: INFO: Deleting all statefulset in ns statefulset-1719
Aug 24 04:26:16.112: INFO: Scaling statefulset ss2 to 0
Aug 24 04:26:46.206: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 04:26:46.211: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:26:46.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1719" for this suite.
Aug 24 04:26:54.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:26:54.401: INFO: namespace statefulset-1719 deletion completed in 8.157393919s

• [SLOW TEST:119.761 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
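The StatefulSet spec above drives a canary and then a phased rollout with the `RollingUpdate` strategy's `partition` field: only pods with ordinal >= partition receive the new revision. A sketch matching the log's setup (service `test`, 3 replicas, image updated from `nginx:1.14-alpine` to `nginx:1.15-alpine`; labels are assumed):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test            # the headless service the test creates ("Creating service test")
  replicas: 3
  selector:
    matchLabels: {app: ss2}
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2             # canary: only ss2-2 gets the new revision
  template:
    metadata:
      labels: {app: ss2}
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated image, as in the log
```

Lowering `partition` stepwise (2 → 1 → 0) rolls the new revision out pod by pod, which is why the log waits on `ss2-2`, then `ss2-1`, then `ss2-0` in turn; a partition greater than `replicas` applies the update to no pods at all, the first behavior the test checks.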
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:26:54.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-37ee37f2-06ee-4ad3-85a4-901efa96a330
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-37ee37f2-06ee-4ad3-85a4-901efa96a330
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:27:00.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3432" for this suite.
Aug 24 04:27:22.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:27:22.781: INFO: namespace projected-3432 deletion completed in 22.156392778s

• [SLOW TEST:28.378 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
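The Projected configMap spec above mounts a ConfigMap through a projected volume, updates the ConfigMap, and waits for the kubelet to refresh the file in the running pod. A minimal sketch with assumed key and command (the log's ConfigMap name carries a generated suffix):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    # Re-read the mounted key in a loop so the update becomes observable
    command: ["sh", "-c", "while true; do cat /etc/cm/data-1; sleep 2; done"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm
  volumes:
  - name: cm-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-upd   # illustrative; log name has a suffix
```

Mounted ConfigMap updates propagate on the kubelet's sync interval rather than instantly, which is why the test has a "waiting to observe update in volume" step.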
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:27:22.784: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 24 04:27:26.892: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-e260728e-1e79-4ccb-aa6d-e1706c2e32b8,GenerateName:,Namespace:events-3155,SelfLink:/api/v1/namespaces/events-3155/pods/send-events-e260728e-1e79-4ccb-aa6d-e1706c2e32b8,UID:43d2631d-e436-49d1-bc51-2d8425f46c13,ResourceVersion:2286292,Generation:0,CreationTimestamp:2020-08-24 04:27:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 845177389,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8946d {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8946d,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-8946d true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x856a800} {node.kubernetes.io/unreachable Exists  NoExecute 
0x856a820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:27:22 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:27:26 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:27:26 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:27:22 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.188,StartTime:2020-08-24 04:27:22 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-24 04:27:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://ddf7f039d0a8a33dbb6fa25b48e7651292e0d8923c48807272e11e879f2af779}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Aug 24 04:27:28.901: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 24 04:27:30.909: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:27:30.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-3155" for this suite.
Aug 24 04:28:09.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:28:09.120: INFO: namespace events-3155 deletion completed in 38.159806054s

• [SLOW TEST:46.337 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:28:09.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-d5c0bc64-bbc1-40ed-942c-f66c817e73f6
STEP: Creating a pod to test consume configMaps
Aug 24 04:28:09.239: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da" in namespace "projected-2666" to be "success or failure"
Aug 24 04:28:09.258: INFO: Pod "pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da": Phase="Pending", Reason="", readiness=false. Elapsed: 18.709053ms
Aug 24 04:28:11.265: INFO: Pod "pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025885312s
Aug 24 04:28:13.271: INFO: Pod "pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031875449s
Aug 24 04:28:15.278: INFO: Pod "pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.038521251s
STEP: Saw pod success
Aug 24 04:28:15.278: INFO: Pod "pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da" satisfied condition "success or failure"
Aug 24 04:28:15.284: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da container projected-configmap-volume-test: 
STEP: delete the pod
Aug 24 04:28:15.310: INFO: Waiting for pod pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da to disappear
Aug 24 04:28:15.326: INFO: Pod pod-projected-configmaps-1a5c5f22-98ec-49f1-9446-fd852cf212da no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:28:15.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2666" for this suite.
Aug 24 04:28:21.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:28:21.457: INFO: namespace projected-2666 deletion completed in 6.123916486s

• [SLOW TEST:12.336 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:28:21.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:28:25.658: INFO: Waiting up to 5m0s for pod "client-envvars-95149b62-a676-4c8d-85c5-b010f585b091" in namespace "pods-1399" to be "success or failure"
Aug 24 04:28:25.664: INFO: Pod "client-envvars-95149b62-a676-4c8d-85c5-b010f585b091": Phase="Pending", Reason="", readiness=false. Elapsed: 5.565508ms
Aug 24 04:28:27.802: INFO: Pod "client-envvars-95149b62-a676-4c8d-85c5-b010f585b091": Phase="Pending", Reason="", readiness=false. Elapsed: 2.143584599s
Aug 24 04:28:29.807: INFO: Pod "client-envvars-95149b62-a676-4c8d-85c5-b010f585b091": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.148846211s
STEP: Saw pod success
Aug 24 04:28:29.808: INFO: Pod "client-envvars-95149b62-a676-4c8d-85c5-b010f585b091" satisfied condition "success or failure"
Aug 24 04:28:29.903: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-95149b62-a676-4c8d-85c5-b010f585b091 container env3cont: 
STEP: delete the pod
Aug 24 04:28:29.944: INFO: Waiting for pod client-envvars-95149b62-a676-4c8d-85c5-b010f585b091 to disappear
Aug 24 04:28:29.957: INFO: Pod client-envvars-95149b62-a676-4c8d-85c5-b010f585b091 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:28:29.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1399" for this suite.
Aug 24 04:29:09.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:29:10.125: INFO: namespace pods-1399 deletion completed in 40.159280439s

• [SLOW TEST:48.666 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:29:10.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-dad35735-7727-49e3-9840-bc87e531cd11
STEP: Creating a pod to test consume configMaps
Aug 24 04:29:10.244: INFO: Waiting up to 5m0s for pod "pod-configmaps-bb4896a0-4511-44b4-889a-9a020c92c8d4" in namespace "configmap-4464" to be "success or failure"
Aug 24 04:29:10.265: INFO: Pod "pod-configmaps-bb4896a0-4511-44b4-889a-9a020c92c8d4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.885752ms
Aug 24 04:29:12.305: INFO: Pod "pod-configmaps-bb4896a0-4511-44b4-889a-9a020c92c8d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0607439s
Aug 24 04:29:14.313: INFO: Pod "pod-configmaps-bb4896a0-4511-44b4-889a-9a020c92c8d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068165545s
STEP: Saw pod success
Aug 24 04:29:14.313: INFO: Pod "pod-configmaps-bb4896a0-4511-44b4-889a-9a020c92c8d4" satisfied condition "success or failure"
Aug 24 04:29:14.319: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-bb4896a0-4511-44b4-889a-9a020c92c8d4 container configmap-volume-test: 
STEP: delete the pod
Aug 24 04:29:14.370: INFO: Waiting for pod pod-configmaps-bb4896a0-4511-44b4-889a-9a020c92c8d4 to disappear
Aug 24 04:29:14.419: INFO: Pod pod-configmaps-bb4896a0-4511-44b4-889a-9a020c92c8d4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:29:14.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4464" for this suite.
Aug 24 04:29:20.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:29:20.611: INFO: namespace configmap-4464 deletion completed in 6.181236145s

• [SLOW TEST:10.485 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:29:20.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Aug 24 04:29:21.509: INFO: Pod name wrapped-volume-race-7f7e9d11-1821-42de-8b2e-77f73e4db6b1: Found 0 pods out of 5
Aug 24 04:29:26.532: INFO: Pod name wrapped-volume-race-7f7e9d11-1821-42de-8b2e-77f73e4db6b1: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-7f7e9d11-1821-42de-8b2e-77f73e4db6b1 in namespace emptydir-wrapper-1565, will wait for the garbage collector to delete the pods
Aug 24 04:29:40.662: INFO: Deleting ReplicationController wrapped-volume-race-7f7e9d11-1821-42de-8b2e-77f73e4db6b1 took: 9.437004ms
Aug 24 04:29:40.963: INFO: Terminating ReplicationController wrapped-volume-race-7f7e9d11-1821-42de-8b2e-77f73e4db6b1 pods took: 300.992061ms
STEP: Creating RC which spawns configmap-volume pods
Aug 24 04:30:23.959: INFO: Pod name wrapped-volume-race-41c0a875-82b8-4829-b37f-28e2192c20d6: Found 0 pods out of 5
Aug 24 04:30:29.019: INFO: Pod name wrapped-volume-race-41c0a875-82b8-4829-b37f-28e2192c20d6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-41c0a875-82b8-4829-b37f-28e2192c20d6 in namespace emptydir-wrapper-1565, will wait for the garbage collector to delete the pods
Aug 24 04:30:45.352: INFO: Deleting ReplicationController wrapped-volume-race-41c0a875-82b8-4829-b37f-28e2192c20d6 took: 10.700085ms
Aug 24 04:30:45.653: INFO: Terminating ReplicationController wrapped-volume-race-41c0a875-82b8-4829-b37f-28e2192c20d6 pods took: 301.047484ms
STEP: Creating RC which spawns configmap-volume pods
Aug 24 04:31:33.600: INFO: Pod name wrapped-volume-race-d0c44d76-9d62-4129-8d0a-70dcc0b5128f: Found 0 pods out of 5
Aug 24 04:31:38.619: INFO: Pod name wrapped-volume-race-d0c44d76-9d62-4129-8d0a-70dcc0b5128f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d0c44d76-9d62-4129-8d0a-70dcc0b5128f in namespace emptydir-wrapper-1565, will wait for the garbage collector to delete the pods
Aug 24 04:31:54.784: INFO: Deleting ReplicationController wrapped-volume-race-d0c44d76-9d62-4129-8d0a-70dcc0b5128f took: 38.026937ms
Aug 24 04:31:55.185: INFO: Terminating ReplicationController wrapped-volume-race-d0c44d76-9d62-4129-8d0a-70dcc0b5128f pods took: 401.075038ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:32:35.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1565" for this suite.
Aug 24 04:32:47.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:32:47.720: INFO: namespace emptydir-wrapper-1565 deletion completed in 12.147794256s

• [SLOW TEST:207.108 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:32:47.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-c2963ee4-d5bd-4931-b14a-4ddadbe0cd33
STEP: Creating a pod to test consume configMaps
Aug 24 04:32:47.827: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-51ba75db-8059-4fea-b173-7eaa17eac8f2" in namespace "projected-207" to be "success or failure"
Aug 24 04:32:47.862: INFO: Pod "pod-projected-configmaps-51ba75db-8059-4fea-b173-7eaa17eac8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 34.458612ms
Aug 24 04:32:49.869: INFO: Pod "pod-projected-configmaps-51ba75db-8059-4fea-b173-7eaa17eac8f2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04207616s
Aug 24 04:32:51.877: INFO: Pod "pod-projected-configmaps-51ba75db-8059-4fea-b173-7eaa17eac8f2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049513787s
STEP: Saw pod success
Aug 24 04:32:51.877: INFO: Pod "pod-projected-configmaps-51ba75db-8059-4fea-b173-7eaa17eac8f2" satisfied condition "success or failure"
Aug 24 04:32:51.883: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-51ba75db-8059-4fea-b173-7eaa17eac8f2 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 24 04:32:51.911: INFO: Waiting for pod pod-projected-configmaps-51ba75db-8059-4fea-b173-7eaa17eac8f2 to disappear
Aug 24 04:32:51.950: INFO: Pod pod-projected-configmaps-51ba75db-8059-4fea-b173-7eaa17eac8f2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:32:51.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-207" for this suite.
Aug 24 04:32:57.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:32:58.133: INFO: namespace projected-207 deletion completed in 6.171059011s

• [SLOW TEST:10.412 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:32:58.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:32:58.235: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 24 04:32:59.368: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:32:59.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3835" for this suite.
Aug 24 04:33:05.512: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:33:05.635: INFO: namespace replication-controller-3835 deletion completed in 6.145178463s

• [SLOW TEST:7.500 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:33:05.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 24 04:33:06.015: INFO: Waiting up to 5m0s for pod "pod-829a1c2f-194a-487b-8e5d-579e7fd93b2d" in namespace "emptydir-7207" to be "success or failure"
Aug 24 04:33:06.047: INFO: Pod "pod-829a1c2f-194a-487b-8e5d-579e7fd93b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.719119ms
Aug 24 04:33:08.054: INFO: Pod "pod-829a1c2f-194a-487b-8e5d-579e7fd93b2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038917058s
Aug 24 04:33:10.446: INFO: Pod "pod-829a1c2f-194a-487b-8e5d-579e7fd93b2d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.430384018s
STEP: Saw pod success
Aug 24 04:33:10.446: INFO: Pod "pod-829a1c2f-194a-487b-8e5d-579e7fd93b2d" satisfied condition "success or failure"
Aug 24 04:33:10.741: INFO: Trying to get logs from node iruya-worker2 pod pod-829a1c2f-194a-487b-8e5d-579e7fd93b2d container test-container: 
STEP: delete the pod
Aug 24 04:33:10.783: INFO: Waiting for pod pod-829a1c2f-194a-487b-8e5d-579e7fd93b2d to disappear
Aug 24 04:33:10.800: INFO: Pod pod-829a1c2f-194a-487b-8e5d-579e7fd93b2d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:33:10.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7207" for this suite.
Aug 24 04:33:17.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:33:17.240: INFO: namespace emptydir-7207 deletion completed in 6.428779827s

• [SLOW TEST:11.605 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:33:17.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 24 04:33:22.008: INFO: Successfully updated pod "annotationupdate537a9aea-571b-49e3-92f5-60538571d67f"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:33:24.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9899" for this suite.
Aug 24 04:33:46.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:33:46.428: INFO: namespace downward-api-9899 deletion completed in 22.160496974s

• [SLOW TEST:29.183 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:33:46.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 24 04:33:46.585: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-328,SelfLink:/api/v1/namespaces/watch-328/configmaps/e2e-watch-test-watch-closed,UID:da49a28a-1308-4f2e-9176-49386444f6c8,ResourceVersion:2288117,Generation:0,CreationTimestamp:2020-08-24 04:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 24 04:33:46.586: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-328,SelfLink:/api/v1/namespaces/watch-328/configmaps/e2e-watch-test-watch-closed,UID:da49a28a-1308-4f2e-9176-49386444f6c8,ResourceVersion:2288118,Generation:0,CreationTimestamp:2020-08-24 04:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 24 04:33:46.600: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-328,SelfLink:/api/v1/namespaces/watch-328/configmaps/e2e-watch-test-watch-closed,UID:da49a28a-1308-4f2e-9176-49386444f6c8,ResourceVersion:2288120,Generation:0,CreationTimestamp:2020-08-24 04:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 24 04:33:46.601: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-328,SelfLink:/api/v1/namespaces/watch-328/configmaps/e2e-watch-test-watch-closed,UID:da49a28a-1308-4f2e-9176-49386444f6c8,ResourceVersion:2288121,Generation:0,CreationTimestamp:2020-08-24 04:33:46 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:33:46.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-328" for this suite.
Aug 24 04:33:52.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:33:52.868: INFO: namespace watch-328 deletion completed in 6.241915418s

• [SLOW TEST:6.437 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
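The restart pattern exercised above — open a second watch starting from the last resourceVersion the first watch observed, and receive every change since — can be sketched by hand. A minimal, illustrative sketch (the object mirrors the test's ConfigMap; the resume resourceVersion value is an assumption for illustration, not taken from this run):

```yaml
# ConfigMap watched by the test; the watch selects on this label.
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-watch-closed
  namespace: watch-328
  labels:
    watch-this-configmap: watch-closed-and-restarted
data:
  mutation: "2"
# Resume a watch from a previously observed resourceVersion via the raw list
# endpoint (watch=true and resourceVersion are core API query parameters):
#   kubectl get --raw "/api/v1/namespaces/watch-328/configmaps?watch=true&resourceVersion=2288119"
# Events newer than that version (here the MODIFIED at 2288120 and the DELETED
# at 2288121) are replayed to the new watch.
```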
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:33:52.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:33:53.046: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9f3c2985-33f9-4c8d-a828-64aedda86916" in namespace "downward-api-5837" to be "success or failure"
Aug 24 04:33:53.053: INFO: Pod "downwardapi-volume-9f3c2985-33f9-4c8d-a828-64aedda86916": Phase="Pending", Reason="", readiness=false. Elapsed: 7.573689ms
Aug 24 04:33:55.071: INFO: Pod "downwardapi-volume-9f3c2985-33f9-4c8d-a828-64aedda86916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025514467s
Aug 24 04:33:57.078: INFO: Pod "downwardapi-volume-9f3c2985-33f9-4c8d-a828-64aedda86916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032747375s
STEP: Saw pod success
Aug 24 04:33:57.079: INFO: Pod "downwardapi-volume-9f3c2985-33f9-4c8d-a828-64aedda86916" satisfied condition "success or failure"
Aug 24 04:33:57.084: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-9f3c2985-33f9-4c8d-a828-64aedda86916 container client-container: 
STEP: delete the pod
Aug 24 04:33:57.143: INFO: Waiting for pod downwardapi-volume-9f3c2985-33f9-4c8d-a828-64aedda86916 to disappear
Aug 24 04:33:57.166: INFO: Pod downwardapi-volume-9f3c2985-33f9-4c8d-a828-64aedda86916 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:33:57.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5837" for this suite.
Aug 24 04:34:03.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:34:03.536: INFO: namespace downward-api-5837 deletion completed in 6.359501597s

• [SLOW TEST:10.667 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
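The behaviour verified above — a downward API volume exposing `limits.cpu` falls back to the node's allocatable CPU when the container sets no CPU limit — comes down to a `resourceFieldRef` volume item. A minimal sketch (names are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    # No resources.limits.cpu here, so the downward API reports the
    # node's allocatable CPU as the default limit.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
```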
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:34:03.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Aug 24 04:34:03.622: INFO: Pod name pod-release: Found 0 pods out of 1
Aug 24 04:34:08.630: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:34:08.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7196" for this suite.
Aug 24 04:34:14.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:34:15.163: INFO: namespace replication-controller-7196 deletion completed in 6.449766644s

• [SLOW TEST:11.624 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
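The release mechanic tested above relies on the ReplicationController's label selector: relabelling a pod so it no longer matches orphans ("releases") it, and the RC then creates a replacement to restore the replica count. A minimal sketch (names are illustrative):

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
# Changing the matched label releases the pod from the RC, which then
# spins up a replacement (pod name here is a placeholder):
#   kubectl label pod <pod-name> name=released --overwrite
```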
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:34:15.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6640/configmap-test-a3618e2c-5a2e-42f9-8fbb-33286bdaf40d
STEP: Creating a pod to test consume configMaps
Aug 24 04:34:15.256: INFO: Waiting up to 5m0s for pod "pod-configmaps-01593a82-0171-4975-b338-a1195e2c9fac" in namespace "configmap-6640" to be "success or failure"
Aug 24 04:34:15.317: INFO: Pod "pod-configmaps-01593a82-0171-4975-b338-a1195e2c9fac": Phase="Pending", Reason="", readiness=false. Elapsed: 60.629818ms
Aug 24 04:34:17.324: INFO: Pod "pod-configmaps-01593a82-0171-4975-b338-a1195e2c9fac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067963719s
Aug 24 04:34:19.331: INFO: Pod "pod-configmaps-01593a82-0171-4975-b338-a1195e2c9fac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075021719s
STEP: Saw pod success
Aug 24 04:34:19.332: INFO: Pod "pod-configmaps-01593a82-0171-4975-b338-a1195e2c9fac" satisfied condition "success or failure"
Aug 24 04:34:19.337: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-01593a82-0171-4975-b338-a1195e2c9fac container env-test: 
STEP: delete the pod
Aug 24 04:34:19.475: INFO: Waiting for pod pod-configmaps-01593a82-0171-4975-b338-a1195e2c9fac to disappear
Aug 24 04:34:19.627: INFO: Pod pod-configmaps-01593a82-0171-4975-b338-a1195e2c9fac no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:34:19.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6640" for this suite.
Aug 24 04:34:25.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:34:25.806: INFO: namespace configmap-6640 deletion completed in 6.169152295s

• [SLOW TEST:10.640 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
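Consuming a ConfigMap through the environment, as exercised above, amounts to a `configMapKeyRef` entry in the container's `env`. A minimal sketch (names and keys are illustrative, not taken from this run):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test   # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]   # prints CONFIG_DATA_1=value-1 among the env vars
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test
          key: data-1
```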
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:34:25.810: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-3ab5dfc0-4c7b-42bd-91c6-8bcfb216f916 in namespace container-probe-180
Aug 24 04:34:29.987: INFO: Started pod busybox-3ab5dfc0-4c7b-42bd-91c6-8bcfb216f916 in namespace container-probe-180
STEP: checking the pod's current state and verifying that restartCount is present
Aug 24 04:34:29.993: INFO: Initial restart count of pod busybox-3ab5dfc0-4c7b-42bd-91c6-8bcfb216f916 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:38:30.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-180" for this suite.
Aug 24 04:38:36.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:38:37.202: INFO: namespace container-probe-180 deletion completed in 6.23834509s

• [SLOW TEST:251.392 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
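The probe under test execs `cat /tmp/health` in the container; as long as the file exists the probe succeeds, so over the four-minute observation window restartCount stays at 0. A minimal sketch of such a pod (names and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness   # illustrative name
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    # Create the health file and keep the container alive; the probe
    # keeps passing because the file is never removed.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
```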
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:38:37.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 24 04:38:37.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-3118'
Aug 24 04:38:41.249: INFO: stderr: ""
Aug 24 04:38:41.249: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Aug 24 04:38:46.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-3118 -o json'
Aug 24 04:38:47.411: INFO: stderr: ""
Aug 24 04:38:47.412: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-08-24T04:38:41Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-3118\",\n        \"resourceVersion\": \"2288835\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3118/pods/e2e-test-nginx-pod\",\n        \"uid\": \"836aebcc-2123-40f5-b806-b4d2916c9e6e\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-qxl9m\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-worker2\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-qxl9m\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-qxl9m\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-24T04:38:41Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-24T04:38:44Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-24T04:38:44Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-08-24T04:38:41Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"containerd://b5e96a618746ee128c9608ca740e34472ec2625a296d9a23c550d9d3a90b3ed9\",\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-08-24T04:38:43Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"172.18.0.5\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.244.2.183\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-08-24T04:38:41Z\"\n    }\n}\n"
STEP: replace the image in the pod
Aug 24 04:38:47.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3118'
Aug 24 04:38:48.931: INFO: stderr: ""
Aug 24 04:38:48.931: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Aug 24 04:38:48.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3118'
Aug 24 04:39:03.782: INFO: stderr: ""
Aug 24 04:39:03.782: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:39:03.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3118" for this suite.
Aug 24 04:39:09.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:39:09.958: INFO: namespace kubectl-3118 deletion completed in 6.160526002s

• [SLOW TEST:32.755 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
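The image swap above works by fetching the live pod as JSON, editing the image field, and piping the full object back through `kubectl replace -f -` (replace requires a complete object, unlike a strategic-merge patch). The same effect with a saved manifest — names mirror the run, the file layout is illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-3118
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # swapped from nginx:1.14-alpine
# Apply with the full edited object (container image is one of the few
# mutable pod fields, so the replace succeeds in place):
#   kubectl replace -f pod.yaml
```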
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:39:09.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8618
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-8618
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8618
Aug 24 04:39:10.168: INFO: Found 0 stateful pods, waiting for 1
Aug 24 04:39:20.178: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Aug 24 04:39:20.186: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 04:39:21.664: INFO: stderr: "I0824 04:39:21.474185     617 log.go:172] (0x2b08540) (0x2b085b0) Create stream\nI0824 04:39:21.478377     617 log.go:172] (0x2b08540) (0x2b085b0) Stream added, broadcasting: 1\nI0824 04:39:21.494456     617 log.go:172] (0x2b08540) Reply frame received for 1\nI0824 04:39:21.495071     617 log.go:172] (0x2b08540) (0x2b08620) Create stream\nI0824 04:39:21.495152     617 log.go:172] (0x2b08540) (0x2b08620) Stream added, broadcasting: 3\nI0824 04:39:21.496518     617 log.go:172] (0x2b08540) Reply frame received for 3\nI0824 04:39:21.496819     617 log.go:172] (0x2b08540) (0x24ac620) Create stream\nI0824 04:39:21.496880     617 log.go:172] (0x2b08540) (0x24ac620) Stream added, broadcasting: 5\nI0824 04:39:21.497861     617 log.go:172] (0x2b08540) Reply frame received for 5\nI0824 04:39:21.601819     617 log.go:172] (0x2b08540) Data frame received for 5\nI0824 04:39:21.602125     617 log.go:172] (0x24ac620) (5) Data frame handling\nI0824 04:39:21.602771     617 log.go:172] (0x24ac620) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 04:39:21.641072     617 log.go:172] (0x2b08540) Data frame received for 3\nI0824 04:39:21.641318     617 log.go:172] (0x2b08620) (3) Data frame handling\nI0824 04:39:21.641545     617 log.go:172] (0x2b08620) (3) Data frame sent\nI0824 04:39:21.641766     617 log.go:172] (0x2b08540) Data frame received for 3\nI0824 04:39:21.641944     617 log.go:172] (0x2b08540) Data frame received for 5\nI0824 04:39:21.642224     617 log.go:172] (0x24ac620) (5) Data frame handling\nI0824 04:39:21.642376     617 log.go:172] (0x2b08620) (3) Data frame handling\nI0824 04:39:21.642865     617 log.go:172] (0x2b08540) Data frame received for 1\nI0824 04:39:21.643027     617 log.go:172] (0x2b085b0) (1) Data frame handling\nI0824 04:39:21.643262     617 log.go:172] (0x2b085b0) (1) Data frame sent\nI0824 04:39:21.645313     617 log.go:172] (0x2b08540) (0x2b085b0) Stream removed, broadcasting: 1\nI0824 04:39:21.649406     617 log.go:172] (0x2b08540) Go away received\nI0824 04:39:21.651343     617 log.go:172] (0x2b08540) (0x2b085b0) Stream removed, broadcasting: 1\nI0824 04:39:21.651718     617 log.go:172] (0x2b08540) (0x2b08620) Stream removed, broadcasting: 3\nI0824 04:39:21.652073     617 log.go:172] (0x2b08540) (0x24ac620) Stream removed, broadcasting: 5\n"
Aug 24 04:39:21.665: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 04:39:21.666: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 24 04:39:21.672: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 24 04:39:31.680: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 24 04:39:31.680: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 04:39:31.725: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999905306s
Aug 24 04:39:32.732: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98874502s
Aug 24 04:39:33.794: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981152463s
Aug 24 04:39:34.802: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.919803943s
Aug 24 04:39:35.814: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.911672436s
Aug 24 04:39:36.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.899363733s
Aug 24 04:39:37.830: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.89051488s
Aug 24 04:39:38.838: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.883416999s
Aug 24 04:39:39.853: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.875859521s
Aug 24 04:39:40.861: INFO: Verifying statefulset ss doesn't scale past 1 for another 860.169925ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8618
Aug 24 04:39:41.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:39:43.274: INFO: stderr: "I0824 04:39:43.143118     638 log.go:172] (0x2ac47e0) (0x2ac4850) Create stream\nI0824 04:39:43.146065     638 log.go:172] (0x2ac47e0) (0x2ac4850) Stream added, broadcasting: 1\nI0824 04:39:43.161467     638 log.go:172] (0x2ac47e0) Reply frame received for 1\nI0824 04:39:43.162047     638 log.go:172] (0x2ac47e0) (0x28de1c0) Create stream\nI0824 04:39:43.162127     638 log.go:172] (0x2ac47e0) (0x28de1c0) Stream added, broadcasting: 3\nI0824 04:39:43.163385     638 log.go:172] (0x2ac47e0) Reply frame received for 3\nI0824 04:39:43.163645     638 log.go:172] (0x2ac47e0) (0x2a44000) Create stream\nI0824 04:39:43.163711     638 log.go:172] (0x2ac47e0) (0x2a44000) Stream added, broadcasting: 5\nI0824 04:39:43.165032     638 log.go:172] (0x2ac47e0) Reply frame received for 5\nI0824 04:39:43.250701     638 log.go:172] (0x2ac47e0) Data frame received for 5\nI0824 04:39:43.251437     638 log.go:172] (0x2ac47e0) Data frame received for 3\nI0824 04:39:43.251731     638 log.go:172] (0x2a44000) (5) Data frame handling\nI0824 04:39:43.252044     638 log.go:172] (0x2ac47e0) Data frame received for 1\nI0824 04:39:43.252233     638 log.go:172] (0x2ac4850) (1) Data frame handling\nI0824 04:39:43.252441     638 log.go:172] (0x28de1c0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0824 04:39:43.253207     638 log.go:172] (0x28de1c0) (3) Data frame sent\nI0824 04:39:43.254195     638 log.go:172] (0x2a44000) (5) Data frame sent\nI0824 04:39:43.254503     638 log.go:172] (0x2ac47e0) Data frame received for 5\nI0824 04:39:43.254682     638 log.go:172] (0x2a44000) (5) Data frame handling\nI0824 04:39:43.254830     638 log.go:172] (0x2ac4850) (1) Data frame sent\nI0824 04:39:43.255386     638 log.go:172] (0x2ac47e0) Data frame received for 3\nI0824 04:39:43.255515     638 log.go:172] (0x28de1c0) (3) Data frame handling\nI0824 04:39:43.256282     638 log.go:172] (0x2ac47e0) (0x2ac4850) Stream removed, broadcasting: 1\nI0824 04:39:43.256656     638 log.go:172] (0x2ac47e0) Go away received\nI0824 04:39:43.259614     638 log.go:172] (0x2ac47e0) (0x2ac4850) Stream removed, broadcasting: 1\nI0824 04:39:43.259834     638 log.go:172] (0x2ac47e0) (0x28de1c0) Stream removed, broadcasting: 3\nI0824 04:39:43.260036     638 log.go:172] (0x2ac47e0) (0x2a44000) Stream removed, broadcasting: 5\n"
Aug 24 04:39:43.275: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 24 04:39:43.275: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 24 04:39:43.281: INFO: Found 1 stateful pods, waiting for 3
Aug 24 04:39:53.293: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 04:39:53.293: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 04:39:53.293: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Aug 24 04:39:53.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 04:39:54.709: INFO: stderr: "I0824 04:39:54.592304     661 log.go:172] (0x28cc310) (0x28cc380) Create stream\nI0824 04:39:54.595469     661 log.go:172] (0x28cc310) (0x28cc380) Stream added, broadcasting: 1\nI0824 04:39:54.613440     661 log.go:172] (0x28cc310) Reply frame received for 1\nI0824 04:39:54.614171     661 log.go:172] (0x28cc310) (0x24ad3b0) Create stream\nI0824 04:39:54.614282     661 log.go:172] (0x28cc310) (0x24ad3b0) Stream added, broadcasting: 3\nI0824 04:39:54.615976     661 log.go:172] (0x28cc310) Reply frame received for 3\nI0824 04:39:54.616235     661 log.go:172] (0x28cc310) (0x2b16000) Create stream\nI0824 04:39:54.616294     661 log.go:172] (0x28cc310) (0x2b16000) Stream added, broadcasting: 5\nI0824 04:39:54.617396     661 log.go:172] (0x28cc310) Reply frame received for 5\nI0824 04:39:54.692397     661 log.go:172] (0x28cc310) Data frame received for 5\nI0824 04:39:54.692614     661 log.go:172] (0x28cc310) Data frame received for 3\nI0824 04:39:54.692845     661 log.go:172] (0x28cc310) Data frame received for 1\nI0824 04:39:54.693080     661 log.go:172] (0x28cc380) (1) Data frame handling\nI0824 04:39:54.693208     661 log.go:172] (0x2b16000) (5) Data frame handling\nI0824 04:39:54.693322     661 log.go:172] (0x24ad3b0) (3) Data frame handling\nI0824 04:39:54.694141     661 log.go:172] (0x28cc380) (1) Data frame sent\nI0824 04:39:54.694217     661 log.go:172] (0x2b16000) (5) Data frame sent\nI0824 04:39:54.694356     661 log.go:172] (0x24ad3b0) (3) Data frame sent\nI0824 04:39:54.694448     661 log.go:172] (0x28cc310) Data frame received for 3\nI0824 04:39:54.694500     661 log.go:172] (0x24ad3b0) (3) Data frame handling\nI0824 04:39:54.694562     661 log.go:172] (0x28cc310) Data frame received for 5\nI0824 04:39:54.694670     661 log.go:172] (0x2b16000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 04:39:54.696609     661 log.go:172] (0x28cc310) (0x28cc380) Stream removed, broadcasting: 1\nI0824 04:39:54.697568     661 log.go:172] (0x28cc310) Go away received\nI0824 04:39:54.699354     661 log.go:172] (0x28cc310) (0x28cc380) Stream removed, broadcasting: 1\nI0824 04:39:54.699506     661 log.go:172] (0x28cc310) (0x24ad3b0) Stream removed, broadcasting: 3\nI0824 04:39:54.699650     661 log.go:172] (0x28cc310) (0x2b16000) Stream removed, broadcasting: 5\n"
Aug 24 04:39:54.710: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 04:39:54.710: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 24 04:39:54.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 04:39:56.164: INFO: stderr: "I0824 04:39:56.017738     683 log.go:172] (0x2b21f10) (0x2b21f80) Create stream\nI0824 04:39:56.020427     683 log.go:172] (0x2b21f10) (0x2b21f80) Stream added, broadcasting: 1\nI0824 04:39:56.038018     683 log.go:172] (0x2b21f10) Reply frame received for 1\nI0824 04:39:56.038767     683 log.go:172] (0x2b21f10) (0x26aa000) Create stream\nI0824 04:39:56.038834     683 log.go:172] (0x2b21f10) (0x26aa000) Stream added, broadcasting: 3\nI0824 04:39:56.040284     683 log.go:172] (0x2b21f10) Reply frame received for 3\nI0824 04:39:56.040558     683 log.go:172] (0x2b21f10) (0x28460e0) Create stream\nI0824 04:39:56.040634     683 log.go:172] (0x2b21f10) (0x28460e0) Stream added, broadcasting: 5\nI0824 04:39:56.041904     683 log.go:172] (0x2b21f10) Reply frame received for 5\nI0824 04:39:56.116842     683 log.go:172] (0x2b21f10) Data frame received for 5\nI0824 04:39:56.117311     683 log.go:172] (0x28460e0) (5) Data frame handling\nI0824 04:39:56.118092     683 log.go:172] (0x28460e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 04:39:56.142117     683 log.go:172] (0x2b21f10) Data frame received for 3\nI0824 04:39:56.142285     683 log.go:172] (0x26aa000) (3) Data frame handling\nI0824 04:39:56.142457     683 log.go:172] (0x2b21f10) Data frame received for 5\nI0824 04:39:56.142635     683 log.go:172] (0x28460e0) (5) Data frame handling\nI0824 04:39:56.142752     683 log.go:172] (0x26aa000) (3) Data frame sent\nI0824 04:39:56.142898     683 log.go:172] (0x2b21f10) Data frame received for 3\nI0824 04:39:56.143053     683 log.go:172] (0x26aa000) (3) Data frame handling\nI0824 04:39:56.143938     683 log.go:172] (0x2b21f10) Data frame received for 1\nI0824 04:39:56.144066     683 log.go:172] (0x2b21f80) (1) Data frame handling\nI0824 04:39:56.144192     683 log.go:172] (0x2b21f80) (1) Data frame sent\nI0824 04:39:56.146159     683 log.go:172] (0x2b21f10) (0x2b21f80) Stream removed, broadcasting: 1\nI0824 
04:39:56.147480     683 log.go:172] (0x2b21f10) Go away received\nI0824 04:39:56.150558     683 log.go:172] (0x2b21f10) (0x2b21f80) Stream removed, broadcasting: 1\nI0824 04:39:56.150778     683 log.go:172] (0x2b21f10) (0x26aa000) Stream removed, broadcasting: 3\nI0824 04:39:56.151008     683 log.go:172] (0x2b21f10) (0x28460e0) Stream removed, broadcasting: 5\n"
Aug 24 04:39:56.164: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 04:39:56.164: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 24 04:39:56.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 04:39:57.550: INFO: stderr: "I0824 04:39:57.400496     706 log.go:172] (0x2787490) (0x2787500) Create stream\nI0824 04:39:57.403198     706 log.go:172] (0x2787490) (0x2787500) Stream added, broadcasting: 1\nI0824 04:39:57.419315     706 log.go:172] (0x2787490) Reply frame received for 1\nI0824 04:39:57.419901     706 log.go:172] (0x2787490) (0x2668310) Create stream\nI0824 04:39:57.419976     706 log.go:172] (0x2787490) (0x2668310) Stream added, broadcasting: 3\nI0824 04:39:57.421433     706 log.go:172] (0x2787490) Reply frame received for 3\nI0824 04:39:57.421817     706 log.go:172] (0x2787490) (0x24a47e0) Create stream\nI0824 04:39:57.421943     706 log.go:172] (0x2787490) (0x24a47e0) Stream added, broadcasting: 5\nI0824 04:39:57.423169     706 log.go:172] (0x2787490) Reply frame received for 5\nI0824 04:39:57.472609     706 log.go:172] (0x2787490) Data frame received for 5\nI0824 04:39:57.473056     706 log.go:172] (0x24a47e0) (5) Data frame handling\nI0824 04:39:57.473751     706 log.go:172] (0x24a47e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 04:39:57.529142     706 log.go:172] (0x2787490) Data frame received for 3\nI0824 04:39:57.529362     706 log.go:172] (0x2668310) (3) Data frame handling\nI0824 04:39:57.529479     706 log.go:172] (0x2668310) (3) Data frame sent\nI0824 04:39:57.529576     706 log.go:172] (0x2787490) Data frame received for 3\nI0824 04:39:57.529646     706 log.go:172] (0x2668310) (3) Data frame handling\nI0824 04:39:57.529846     706 log.go:172] (0x2787490) Data frame received for 5\nI0824 04:39:57.530047     706 log.go:172] (0x24a47e0) (5) Data frame handling\nI0824 04:39:57.530235     706 log.go:172] (0x2787490) Data frame received for 1\nI0824 04:39:57.530323     706 log.go:172] (0x2787500) (1) Data frame handling\nI0824 04:39:57.530407     706 log.go:172] (0x2787500) (1) Data frame sent\nI0824 04:39:57.531233     706 log.go:172] (0x2787490) (0x2787500) Stream removed, broadcasting: 1\nI0824 
04:39:57.533697     706 log.go:172] (0x2787490) Go away received\nI0824 04:39:57.536400     706 log.go:172] (0x2787490) (0x2787500) Stream removed, broadcasting: 1\nI0824 04:39:57.536594     706 log.go:172] (0x2787490) (0x2668310) Stream removed, broadcasting: 3\nI0824 04:39:57.536887     706 log.go:172] (0x2787490) (0x24a47e0) Stream removed, broadcasting: 5\n"
Aug 24 04:39:57.551: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 04:39:57.551: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 24 04:39:57.551: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 04:39:57.555: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 24 04:40:07.582: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 24 04:40:07.583: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 24 04:40:07.583: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 24 04:40:07.605: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999990149s
Aug 24 04:40:08.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984993193s
Aug 24 04:40:09.657: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.941057227s
Aug 24 04:40:10.666: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932890379s
Aug 24 04:40:11.674: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.923237187s
Aug 24 04:40:12.684: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.915954753s
Aug 24 04:40:13.693: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.905744977s
Aug 24 04:40:14.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.896332974s
Aug 24 04:40:15.713: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.88516423s
Aug 24 04:40:16.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 876.083511ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-8618
Aug 24 04:40:17.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:40:19.121: INFO: stderr: "I0824 04:40:19.010558     729 log.go:172] (0x2991f10) (0x2991f80) Create stream\nI0824 04:40:19.014004     729 log.go:172] (0x2991f10) (0x2991f80) Stream added, broadcasting: 1\nI0824 04:40:19.035348     729 log.go:172] (0x2991f10) Reply frame received for 1\nI0824 04:40:19.035822     729 log.go:172] (0x2991f10) (0x24160e0) Create stream\nI0824 04:40:19.035888     729 log.go:172] (0x2991f10) (0x24160e0) Stream added, broadcasting: 3\nI0824 04:40:19.037274     729 log.go:172] (0x2991f10) Reply frame received for 3\nI0824 04:40:19.037575     729 log.go:172] (0x2991f10) (0x24b8770) Create stream\nI0824 04:40:19.037650     729 log.go:172] (0x2991f10) (0x24b8770) Stream added, broadcasting: 5\nI0824 04:40:19.038895     729 log.go:172] (0x2991f10) Reply frame received for 5\nI0824 04:40:19.105078     729 log.go:172] (0x2991f10) Data frame received for 5\nI0824 04:40:19.105303     729 log.go:172] (0x2991f10) Data frame received for 3\nI0824 04:40:19.105473     729 log.go:172] (0x24b8770) (5) Data frame handling\nI0824 04:40:19.105576     729 log.go:172] (0x2991f10) Data frame received for 1\nI0824 04:40:19.105737     729 log.go:172] (0x2991f80) (1) Data frame handling\nI0824 04:40:19.105914     729 log.go:172] (0x24160e0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0824 04:40:19.106792     729 log.go:172] (0x24b8770) (5) Data frame sent\nI0824 04:40:19.106938     729 log.go:172] (0x24160e0) (3) Data frame sent\nI0824 04:40:19.107173     729 log.go:172] (0x2991f80) (1) Data frame sent\nI0824 04:40:19.107415     729 log.go:172] (0x2991f10) Data frame received for 5\nI0824 04:40:19.107544     729 log.go:172] (0x24b8770) (5) Data frame handling\nI0824 04:40:19.107726     729 log.go:172] (0x2991f10) Data frame received for 3\nI0824 04:40:19.108505     729 log.go:172] (0x2991f10) (0x2991f80) Stream removed, broadcasting: 1\nI0824 04:40:19.109432     729 log.go:172] (0x24160e0) (3) Data frame handling\nI0824 
04:40:19.110664     729 log.go:172] (0x2991f10) Go away received\nI0824 04:40:19.112701     729 log.go:172] (0x2991f10) (0x2991f80) Stream removed, broadcasting: 1\nI0824 04:40:19.113009     729 log.go:172] (0x2991f10) (0x24160e0) Stream removed, broadcasting: 3\nI0824 04:40:19.113194     729 log.go:172] (0x2991f10) (0x24b8770) Stream removed, broadcasting: 5\n"
Aug 24 04:40:19.122: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 24 04:40:19.122: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 24 04:40:19.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:40:20.560: INFO: stderr: "I0824 04:40:20.416370     753 log.go:172] (0x27503f0) (0x2750460) Create stream\nI0824 04:40:20.419354     753 log.go:172] (0x27503f0) (0x2750460) Stream added, broadcasting: 1\nI0824 04:40:20.438499     753 log.go:172] (0x27503f0) Reply frame received for 1\nI0824 04:40:20.440228     753 log.go:172] (0x27503f0) (0x281c230) Create stream\nI0824 04:40:20.440375     753 log.go:172] (0x27503f0) (0x281c230) Stream added, broadcasting: 3\nI0824 04:40:20.442209     753 log.go:172] (0x27503f0) Reply frame received for 3\nI0824 04:40:20.442487     753 log.go:172] (0x27503f0) (0x281c700) Create stream\nI0824 04:40:20.442555     753 log.go:172] (0x27503f0) (0x281c700) Stream added, broadcasting: 5\nI0824 04:40:20.443714     753 log.go:172] (0x27503f0) Reply frame received for 5\nI0824 04:40:20.537901     753 log.go:172] (0x27503f0) Data frame received for 3\nI0824 04:40:20.538410     753 log.go:172] (0x27503f0) Data frame received for 5\nI0824 04:40:20.538932     753 log.go:172] (0x281c700) (5) Data frame handling\nI0824 04:40:20.539378     753 log.go:172] (0x281c230) (3) Data frame handling\nI0824 04:40:20.539834     753 log.go:172] (0x27503f0) Data frame received for 1\nI0824 04:40:20.540094     753 log.go:172] (0x2750460) (1) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0824 04:40:20.542347     753 log.go:172] (0x2750460) (1) Data frame sent\nI0824 04:40:20.542895     753 log.go:172] (0x281c230) (3) Data frame sent\nI0824 04:40:20.543116     753 log.go:172] (0x27503f0) Data frame received for 3\nI0824 04:40:20.543182     753 log.go:172] (0x281c230) (3) Data frame handling\nI0824 04:40:20.543284     753 log.go:172] (0x281c700) (5) Data frame sent\nI0824 04:40:20.543439     753 log.go:172] (0x27503f0) Data frame received for 5\nI0824 04:40:20.543553     753 log.go:172] (0x281c700) (5) Data frame handling\nI0824 04:40:20.545233     753 log.go:172] (0x27503f0) (0x2750460) Stream removed, broadcasting: 1\nI0824 
04:40:20.545575     753 log.go:172] (0x27503f0) Go away received\nI0824 04:40:20.548035     753 log.go:172] (0x27503f0) (0x2750460) Stream removed, broadcasting: 1\nI0824 04:40:20.548251     753 log.go:172] (0x27503f0) (0x281c230) Stream removed, broadcasting: 3\nI0824 04:40:20.548435     753 log.go:172] (0x27503f0) (0x281c700) Stream removed, broadcasting: 5\n"
Aug 24 04:40:20.561: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 24 04:40:20.561: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 24 04:40:20.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:40:22.292: INFO: rc: 1
Aug 24 04:40:22.295: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    I0824 04:40:21.861076     775 log.go:172] (0x2a7a3f0) (0x2a7a000) Create stream
I0824 04:40:21.863793     775 log.go:172] (0x2a7a3f0) (0x2a7a000) Stream added, broadcasting: 1
I0824 04:40:21.874945     775 log.go:172] (0x2a7a3f0) Reply frame received for 1
I0824 04:40:21.875369     775 log.go:172] (0x2a7a3f0) (0x2a7a070) Create stream
I0824 04:40:21.875430     775 log.go:172] (0x2a7a3f0) (0x2a7a070) Stream added, broadcasting: 3
I0824 04:40:21.876612     775 log.go:172] (0x2a7a3f0) Reply frame received for 3
I0824 04:40:21.876913     775 log.go:172] (0x2a7a3f0) (0x24be850) Create stream
I0824 04:40:21.876986     775 log.go:172] (0x2a7a3f0) (0x24be850) Stream added, broadcasting: 5
I0824 04:40:21.877965     775 log.go:172] (0x2a7a3f0) Reply frame received for 5
I0824 04:40:22.267654     775 log.go:172] (0x2a7a3f0) Data frame received for 1
I0824 04:40:22.267998     775 log.go:172] (0x2a7a000) (1) Data frame handling
I0824 04:40:22.269321     775 log.go:172] (0x2a7a3f0) (0x2a7a070) Stream removed, broadcasting: 3
I0824 04:40:22.271902     775 log.go:172] (0x2a7a000) (1) Data frame sent
I0824 04:40:22.272666     775 log.go:172] (0x2a7a3f0) (0x24be850) Stream removed, broadcasting: 5
I0824 04:40:22.273816     775 log.go:172] (0x2a7a3f0) (0x2a7a000) Stream removed, broadcasting: 1
I0824 04:40:22.274308     775 log.go:172] (0x2a7a3f0) Go away received
I0824 04:40:22.277571     775 log.go:172] (0x2a7a3f0) (0x2a7a000) Stream removed, broadcasting: 1
I0824 04:40:22.277726     775 log.go:172] (0x2a7a3f0) (0x2a7a070) Stream removed, broadcasting: 3
I0824 04:40:22.277812     775 log.go:172] (0x2a7a3f0) (0x24be850) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "1fdccc474d942be72339d19afbdb7824be3a0566c78ee08545414b1c3ad59be1": container not created: not found
 []  0x8ffdb90 exit status 1   true [0x7fc8880 0x7fc88a0 0x7fc88c0] [0x7fc8880 0x7fc88a0 0x7fc88c0] [0x7fc8898 0x7fc88b8] [0x6bbb70 0x6bbb70] 0x8b05080 }:
Command stdout:

stderr:
I0824 04:40:21.861076     775 log.go:172] (0x2a7a3f0) (0x2a7a000) Create stream
I0824 04:40:21.863793     775 log.go:172] (0x2a7a3f0) (0x2a7a000) Stream added, broadcasting: 1
I0824 04:40:21.874945     775 log.go:172] (0x2a7a3f0) Reply frame received for 1
I0824 04:40:21.875369     775 log.go:172] (0x2a7a3f0) (0x2a7a070) Create stream
I0824 04:40:21.875430     775 log.go:172] (0x2a7a3f0) (0x2a7a070) Stream added, broadcasting: 3
I0824 04:40:21.876612     775 log.go:172] (0x2a7a3f0) Reply frame received for 3
I0824 04:40:21.876913     775 log.go:172] (0x2a7a3f0) (0x24be850) Create stream
I0824 04:40:21.876986     775 log.go:172] (0x2a7a3f0) (0x24be850) Stream added, broadcasting: 5
I0824 04:40:21.877965     775 log.go:172] (0x2a7a3f0) Reply frame received for 5
I0824 04:40:22.267654     775 log.go:172] (0x2a7a3f0) Data frame received for 1
I0824 04:40:22.267998     775 log.go:172] (0x2a7a000) (1) Data frame handling
I0824 04:40:22.269321     775 log.go:172] (0x2a7a3f0) (0x2a7a070) Stream removed, broadcasting: 3
I0824 04:40:22.271902     775 log.go:172] (0x2a7a000) (1) Data frame sent
I0824 04:40:22.272666     775 log.go:172] (0x2a7a3f0) (0x24be850) Stream removed, broadcasting: 5
I0824 04:40:22.273816     775 log.go:172] (0x2a7a3f0) (0x2a7a000) Stream removed, broadcasting: 1
I0824 04:40:22.274308     775 log.go:172] (0x2a7a3f0) Go away received
I0824 04:40:22.277571     775 log.go:172] (0x2a7a3f0) (0x2a7a000) Stream removed, broadcasting: 1
I0824 04:40:22.277726     775 log.go:172] (0x2a7a3f0) (0x2a7a070) Stream removed, broadcasting: 3
I0824 04:40:22.277812     775 log.go:172] (0x2a7a3f0) (0x24be850) Stream removed, broadcasting: 5
error: Internal error occurred: error executing command in container: failed to exec in container: failed to create exec "1fdccc474d942be72339d19afbdb7824be3a0566c78ee08545414b1c3ad59be1": container not created: not found

error:
exit status 1
Aug 24 04:40:32.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:40:33.442: INFO: rc: 1
Aug 24 04:40:33.442: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x8ffdc80 exit status 1   true [0x7fc8960 0x7fc8980 0x7fc89a0] [0x7fc8960 0x7fc8980 0x7fc89a0] [0x7fc8978 0x7fc8998] [0x6bbb70 0x6bbb70] 0x8b05480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:40:43.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:40:44.575: INFO: rc: 1
Aug 24 04:40:44.575: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x943a0c0 exit status 1   true [0x80e8518 0x80e8538 0x80e8558] [0x80e8518 0x80e8538 0x80e8558] [0x80e8530 0x80e8550] [0x6bbb70 0x6bbb70] 0x81a3280 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:40:54.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:40:55.765: INFO: rc: 1
Aug 24 04:40:55.765: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x8ffdd70 exit status 1   true [0x7fc8a40 0x7fc8a60 0x7fc8a80] [0x7fc8a40 0x7fc8a60 0x7fc8a80] [0x7fc8a58 0x7fc8a78] [0x6bbb70 0x6bbb70] 0x8b05740 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:41:05.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:41:06.875: INFO: rc: 1
Aug 24 04:41:06.876: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x943a1b0 exit status 1   true [0x80e85f8 0x80e8618 0x80e8638] [0x80e85f8 0x80e8618 0x80e8638] [0x80e8610 0x80e8630] [0x6bbb70 0x6bbb70] 0x81a3800 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:41:16.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:41:18.032: INFO: rc: 1
Aug 24 04:41:18.032: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x943a270 exit status 1   true [0x80e8670 0x80e8690 0x80e86b0] [0x80e8670 0x80e8690 0x80e86b0] [0x80e8688 0x80e86a8] [0x6bbb70 0x6bbb70] 0x81a3bc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:41:28.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:41:29.147: INFO: rc: 1
Aug 24 04:41:29.148: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x89e8090 exit status 1   true [0x7518028 0x7518048 0x7518068] [0x7518028 0x7518048 0x7518068] [0x7518040 0x7518060] [0x6bbb70 0x6bbb70] 0x891c3c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:41:39.149: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:41:40.266: INFO: rc: 1
Aug 24 04:41:40.267: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x943a360 exit status 1   true [0x80e8750 0x80e8770 0x80e8790] [0x80e8750 0x80e8770 0x80e8790] [0x80e8768 0x80e8788] [0x6bbb70 0x6bbb70] 0x88320c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:41:50.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:41:51.404: INFO: rc: 1
Aug 24 04:41:51.405: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x8ffdec0 exit status 1   true [0x7fc8bf0 0x7fc8c10 0x7fc8c30] [0x7fc8bf0 0x7fc8c10 0x7fc8c30] [0x7fc8c08 0x7fc8c28] [0x6bbb70 0x6bbb70] 0x8b05b80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:42:01.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:42:02.559: INFO: rc: 1
Aug 24 04:42:02.560: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x943a450 exit status 1   true [0x80e8830 0x80e8850 0x80e8870] [0x80e8830 0x80e8850 0x80e8870] [0x80e8848 0x80e8868] [0x6bbb70 0x6bbb70] 0x8832380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:42:12.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:42:13.684: INFO: rc: 1
Aug 24 04:42:13.685: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x84e2090 exit status 1   true [0x7518030 0x7518050 0x7518070] [0x7518030 0x7518050 0x7518070] [0x7518048 0x7518068] [0x6bbb70 0x6bbb70] 0x8d90240 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:42:23.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:42:24.841: INFO: rc: 1
Aug 24 04:42:24.841: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x964c090 exit status 1   true [0x7a62028 0x7a62048 0x7a62068] [0x7a62028 0x7a62048 0x7a62068] [0x7a62040 0x7a62060] [0x6bbb70 0x6bbb70] 0x6cfbec0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:42:34.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:42:35.934: INFO: rc: 1
Aug 24 04:42:35.934: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x964c150 exit status 1   true [0x7a620a0 0x7a620c0 0x7a620e0] [0x7a620a0 0x7a620c0 0x7a620e0] [0x7a620b8 0x7a620d8] [0x6bbb70 0x6bbb70] 0x78612c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:42:45.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:42:47.011: INFO: rc: 1
Aug 24 04:42:47.011: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x95ce090 exit status 1   true [0x7fc8028 0x7fc8048 0x7fc8068] [0x7fc8028 0x7fc8048 0x7fc8068] [0x7fc8040 0x7fc8060] [0x6bbb70 0x6bbb70] 0x7d1a600 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:42:57.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:42:58.095: INFO: rc: 1
Aug 24 04:42:58.095: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x84e21e0 exit status 1   true [0x7518218 0x7518238 0x7518258] [0x7518218 0x7518238 0x7518258] [0x7518230 0x7518250] [0x6bbb70 0x6bbb70] 0x8d90480 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:43:08.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:43:09.221: INFO: rc: 1
Aug 24 04:43:09.221: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x964c240 exit status 1   true [0x7a62180 0x7a621a0 0x7a621c0] [0x7a62180 0x7a621a0 0x7a621c0] [0x7a62198 0x7a621b8] [0x6bbb70 0x6bbb70] 0x7dda500 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Aug 24 04:43:19.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:43:20.331: INFO: rc: 1
Aug 24 04:43:20.332: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0x84e22a0 exit status 1   true [0x7518290 0x75182d0 0x7518320] [0x7518290 0x75182d0 0x7518320] [0x75182b8 0x7518308] [0x6bbb70 0x6bbb70] 0x8d90780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
... (ten further identical RunHostCmd retries against the deleted pod "ss-2", one every 10s from 04:43:30 through 04:45:11, each failing with the same `Error from server (NotFound): pods "ss-2" not found`; elided) ...
Aug 24 04:45:21.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8618 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 04:45:22.814: INFO: rc: 1
Aug 24 04:45:22.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
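The repeated failures above come from the e2e framework retrying a host command against a pod that has already been deleted, waiting 10s between attempts until its retry budget runs out. A rough Python sketch of that retry loop (a hypothetical helper mirroring the logged behaviour, not the actual framework code):

```python
import time

def run_host_cmd_with_retries(run_cmd, attempts=13, delay=10, sleep=time.sleep):
    """Retry run_cmd until it succeeds or attempts run out, loosely
    mirroring the RunHostCmd retry behaviour seen in the log: each
    failure waits `delay` seconds before the next try."""
    last_err = None
    for _ in range(attempts):
        ok, stdout, stderr = run_cmd()
        if ok:
            return stdout
        last_err = stderr          # e.g. 'pods "ss-2" not found'
        sleep(delay)               # "Waiting 10s to retry failed RunHostCmd"
    raise RuntimeError(f"command failed after {attempts} attempts: {last_err}")

# stub command: fails twice, then succeeds (no cluster needed)
calls = {"n": 0}
def stub():
    calls["n"] += 1
    return (calls["n"] >= 3, "renamed '/tmp/index.html'", 'pods "ss-2" not found')

print(run_host_cmd_with_retries(stub, attempts=5, delay=0, sleep=lambda s: None))
```

In the log the pod never comes back, so every attempt fails and the test moves on with an empty stdout, which is why the final line reports no output at all.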
Aug 24 04:45:22.815: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 24 04:45:22.841: INFO: Deleting all statefulset in ns statefulset-8618
Aug 24 04:45:22.844: INFO: Scaling statefulset ss to 0
Aug 24 04:45:22.857: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 04:45:22.862: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:45:23.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8618" for this suite.
Aug 24 04:45:29.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:45:29.325: INFO: namespace statefulset-8618 deletion completed in 6.135723298s

• [SLOW TEST:379.363 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
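The "scaled down in reverse order" step this test verifies means a StatefulSet always removes its highest-ordinal pod first. A minimal sketch of the expected deletion order, assuming the `ss-<ordinal>` naming seen in this run:

```python
def scale_down_order(replicas_from, replicas_to):
    """StatefulSet pods are removed in reverse ordinal order: the
    highest-numbered pod goes first (sketch of what the test checks)."""
    return [f"ss-{i}" for i in range(replicas_from - 1, replicas_to - 1, -1)]

print(scale_down_order(3, 0))  # -> ['ss-2', 'ss-1', 'ss-0']
```

This also explains why the retried exec above targeted `ss-2`: when scaling from 3 replicas, ordinal 2 is the first pod to disappear.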
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:45:29.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-23b648e3-62fc-4021-9232-ccf3057319e0
STEP: Creating a pod to test consume secrets
Aug 24 04:45:29.613: INFO: Waiting up to 5m0s for pod "pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463" in namespace "secrets-774" to be "success or failure"
Aug 24 04:45:29.667: INFO: Pod "pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463": Phase="Pending", Reason="", readiness=false. Elapsed: 53.609335ms
Aug 24 04:45:31.673: INFO: Pod "pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060002721s
Aug 24 04:45:33.681: INFO: Pod "pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067605764s
Aug 24 04:45:35.689: INFO: Pod "pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.075601011s
STEP: Saw pod success
Aug 24 04:45:35.689: INFO: Pod "pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463" satisfied condition "success or failure"
Aug 24 04:45:35.695: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463 container secret-volume-test: 
STEP: delete the pod
Aug 24 04:45:35.749: INFO: Waiting for pod pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463 to disappear
Aug 24 04:45:35.775: INFO: Pod pod-secrets-63232779-c116-4cc5-94b1-c3d2099d7463 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:45:35.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-774" for this suite.
Aug 24 04:45:41.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:45:42.011: INFO: namespace secrets-774 deletion completed in 6.223793339s
STEP: Destroying namespace "secret-namespace-8970" for this suite.
Aug 24 04:45:48.040: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:45:48.254: INFO: namespace secret-namespace-8970 deletion completed in 6.242943594s

• [SLOW TEST:18.929 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
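What this test exercises can be sketched as a manifest: a secret consumed as a volume, with a decoy secret of the same name living in a second namespace (`secret-namespace-8970` in this run). The object names, data, and image below are illustrative stand-ins, not the exact objects the framework created:

```yaml
# Illustrative sketch (names/paths assumed): the pod must see the secret
# from its own namespace even though a same-named secret exists elsewhere.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: secrets-774             # the pod's namespace
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Secret
metadata:
  name: secret-test
  namespace: secret-namespace-8970   # decoy with the same name
stringData:
  data-1: should-not-be-seen
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
  namespace: secrets-774
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29              # stand-in for the e2e mounttest image
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test        # resolves within secrets-774 only
```

Secret references in a pod spec are name-only and always resolve within the pod's own namespace, which is the property the "success or failure" check above depends on.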
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:45:48.258: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Aug 24 04:45:48.412: INFO: Waiting up to 5m0s for pod "client-containers-a41605f7-07ac-4511-924b-640d5ff8122f" in namespace "containers-678" to be "success or failure"
Aug 24 04:45:48.434: INFO: Pod "client-containers-a41605f7-07ac-4511-924b-640d5ff8122f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.251582ms
Aug 24 04:45:50.441: INFO: Pod "client-containers-a41605f7-07ac-4511-924b-640d5ff8122f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028467206s
Aug 24 04:45:52.449: INFO: Pod "client-containers-a41605f7-07ac-4511-924b-640d5ff8122f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036269613s
STEP: Saw pod success
Aug 24 04:45:52.449: INFO: Pod "client-containers-a41605f7-07ac-4511-924b-640d5ff8122f" satisfied condition "success or failure"
Aug 24 04:45:52.736: INFO: Trying to get logs from node iruya-worker2 pod client-containers-a41605f7-07ac-4511-924b-640d5ff8122f container test-container: 
STEP: delete the pod
Aug 24 04:45:52.952: INFO: Waiting for pod client-containers-a41605f7-07ac-4511-924b-640d5ff8122f to disappear
Aug 24 04:45:52.969: INFO: Pod client-containers-a41605f7-07ac-4511-924b-640d5ff8122f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:45:52.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-678" for this suite.
Aug 24 04:45:59.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:45:59.146: INFO: namespace containers-678 deletion completed in 6.169292722s

• [SLOW TEST:10.889 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
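The "override all" pod created above replaces both the image's entrypoint and its default arguments. A hedged manifest sketch of that shape (image and values illustrative; the actual test uses its own entrypoint-tester image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/echo"]            # overrides the image's ENTRYPOINT
    args: ["override", "arguments"]   # overrides the image's CMD
```

Setting only `args` would keep the image's entrypoint; setting `command` without `args` drops the image's default arguments entirely, which is why the test exercises the "override all" combination separately.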
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:45:59.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:45:59.261: INFO: Creating deployment "nginx-deployment"
Aug 24 04:45:59.270: INFO: Waiting for observed generation 1
Aug 24 04:46:01.296: INFO: Waiting for all required pods to come up
Aug 24 04:46:01.307: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 24 04:46:11.326: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 24 04:46:11.335: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug 24 04:46:11.345: INFO: Updating deployment nginx-deployment
Aug 24 04:46:11.345: INFO: Waiting for observed generation 2
Aug 24 04:46:13.365: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 24 04:46:13.370: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 24 04:46:13.374: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 24 04:46:13.385: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 24 04:46:13.386: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 24 04:46:13.389: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 24 04:46:13.395: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 24 04:46:13.396: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 24 04:46:13.403: INFO: Updating deployment nginx-deployment
Aug 24 04:46:13.403: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 24 04:46:13.623: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 24 04:46:16.766: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
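The numbers verified above (first replicaset 8 → 20, second 5 → 13) fall out of proportional scaling: the 33 total allowed replicas (30 desired + maxSurge 3) are split across the replicasets in proportion to their current sizes (8 and 5). A simplified largest-remainder sketch of that arithmetic (the real deployment controller tracks proportions via annotations, so this is an approximation, not its actual code):

```python
import math

def proportional_scale(rs_sizes, new_total):
    """Split new_total across replicasets proportionally to current size,
    rounding by largest remainder (approximates the deployment
    controller's proportional scaling)."""
    old_total = sum(rs_sizes)
    ideals = [s * new_total / old_total for s in rs_sizes]
    floors = [math.floor(x) for x in ideals]
    leftover = new_total - sum(floors)
    # hand the leftover replicas to the largest fractional parts
    order = sorted(range(len(rs_sizes)),
                   key=lambda i: ideals[i] - floors[i], reverse=True)
    for i in order[:leftover]:
        floors[i] += 1
    return floors

# 30 desired + maxSurge 3 = 33, split across RSes currently at 8 and 5
print(proportional_scale([8, 5], 33))  # -> [20, 13]
```

8 × 33/13 ≈ 20.3 and 5 × 33/13 ≈ 12.7; the one leftover replica goes to the larger remainder, giving exactly the .spec.replicas values the test asserts.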
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 24 04:46:18.049: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4380,SelfLink:/apis/apps/v1/namespaces/deployment-4380/deployments/nginx-deployment,UID:e5ad679d-b92f-473c-86cf-c649df32b081,ResourceVersion:2290223,Generation:3,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-08-24 04:46:13 +0000 UTC 2020-08-24 04:46:13 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-24 04:46:14 +0000 UTC 2020-08-24 04:45:59 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},}

Aug 24 04:46:18.564: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4380,SelfLink:/apis/apps/v1/namespaces/deployment-4380/replicasets/nginx-deployment-55fb7cb77f,UID:11b46810-cd63-4ea9-b24c-f55d6dc79537,ResourceVersion:2290212,Generation:3,CreationTimestamp:2020-08-24 04:46:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e5ad679d-b92f-473c-86cf-c649df32b081 0x7f907a7 0x7f907a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 24 04:46:18.564: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Aug 24 04:46:18.565: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4380,SelfLink:/apis/apps/v1/namespaces/deployment-4380/replicasets/nginx-deployment-7b8c6f4498,UID:5304e9bd-2ad0-455b-ac55-841f92d42828,ResourceVersion:2290205,Generation:3,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment e5ad679d-b92f-473c-86cf-c649df32b081 0x7f90877 0x7f90878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Aug 24 04:46:19.058: INFO: Pod "nginx-deployment-55fb7cb77f-2rwzg" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2rwzg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-2rwzg,UID:dc753f3d-152e-4e48-a369-719a8fb52d2c,ResourceVersion:2290140,Generation:0,CreationTimestamp:2020-08-24 04:46:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8c307 0x8d8c308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8c3a0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8c3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.060: INFO: Pod "nginx-deployment-55fb7cb77f-hlrpw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hlrpw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-hlrpw,UID:30eb0d2d-0429-4463-8f48-3831c213310b,ResourceVersion:2290290,Generation:0,CreationTimestamp:2020-08-24 04:46:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8c4a0 0x8d8c4a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8c520} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8c540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.218,StartTime:2020-08-24 04:46:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.061: INFO: Pod "nginx-deployment-55fb7cb77f-hpl7g" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hpl7g,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-hpl7g,UID:6dbc9040-9652-42cb-a082-9372a5d6d878,ResourceVersion:2290273,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8c630 0x8d8c631}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8c6b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8c6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.062: INFO: Pod "nginx-deployment-55fb7cb77f-ht8zt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ht8zt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-ht8zt,UID:db4c725a-9c25-467f-a27b-90e9d446d685,ResourceVersion:2290142,Generation:0,CreationTimestamp:2020-08-24 04:46:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8c7a0 0x8d8c7a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8c820} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8c840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.064: INFO: Pod "nginx-deployment-55fb7cb77f-j88ss" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j88ss,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-j88ss,UID:d889e562-7a75-46bb-af92-7ae759ea236c,ResourceVersion:2290277,Generation:0,CreationTimestamp:2020-08-24 04:46:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8c910 0x8d8c911}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8c990} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8c9b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.192,StartTime:2020-08-24 04:46:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.065: INFO: Pod "nginx-deployment-55fb7cb77f-k6lfq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k6lfq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-k6lfq,UID:1d26c152-1e9b-44b2-810b-2be46d5149e4,ResourceVersion:2290284,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8caa0 0x8d8caa1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8cb40} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8cb60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.067: INFO: Pod "nginx-deployment-55fb7cb77f-k74gt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-k74gt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-k74gt,UID:22be0e47-1478-46ea-a9f6-a7b4bdf38ebb,ResourceVersion:2290226,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8cc30 0x8d8cc31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8ccb0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8ccd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.068: INFO: Pod "nginx-deployment-55fb7cb77f-lvphq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lvphq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-lvphq,UID:819763cf-e80c-4c10-ac59-9237b0454b28,ResourceVersion:2290246,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8cda0 0x8d8cda1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8ce20} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8ce40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.070: INFO: Pod "nginx-deployment-55fb7cb77f-m7s66" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m7s66,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-m7s66,UID:fee82ce1-2e29-41da-b716-18ae0f8356f0,ResourceVersion:2290289,Generation:0,CreationTimestamp:2020-08-24 04:46:11 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8cf30 0x8d8cf31}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8cfb0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8cfd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:11 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.194,StartTime:2020-08-24 04:46:11 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.071: INFO: Pod "nginx-deployment-55fb7cb77f-rrsrf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rrsrf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-rrsrf,UID:a3881f8f-c020-466a-b3f4-812f06ee4171,ResourceVersion:2290215,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8d0c0 0x8d8d0c1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8d140} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8d160}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.073: INFO: Pod "nginx-deployment-55fb7cb77f-v2jr8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-v2jr8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-v2jr8,UID:5dba1ade-5b0a-4986-976b-354744d30558,ResourceVersion:2290248,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8d240 0x8d8d241}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8d2c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8d2e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.074: INFO: Pod "nginx-deployment-55fb7cb77f-xpd4v" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xpd4v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-xpd4v,UID:92c078b7-2ba1-420d-adb0-453ca11fb0e5,ResourceVersion:2290252,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8d3b0 0x8d8d3b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  
NoExecute 0x8d8d430} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8d450}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.075: INFO: Pod "nginx-deployment-55fb7cb77f-zvqgm" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zvqgm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-55fb7cb77f-zvqgm,UID:bf7f562d-08f5-42f3-8854-cb7d300ab2c4,ResourceVersion:2290230,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 11b46810-cd63-4ea9-b24c-f55d6dc79537 0x8d8d520 0x8d8d521}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8d5a0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8d5c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.077: INFO: Pod "nginx-deployment-7b8c6f4498-2z7mv" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2z7mv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-2z7mv,UID:c200b2b7-8be5-4b02-a022-69ab3d6f911d,ResourceVersion:2290257,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x8d8d690 0x8d8d691}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8d700} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8d720}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.078: INFO: Pod "nginx-deployment-7b8c6f4498-4zhxx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4zhxx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-4zhxx,UID:a1d9f8e8-559c-4a2e-b35c-51b733741460,ResourceVersion:2290266,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x8d8d7e7 0x8d8d7e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8d860} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8d880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.079: INFO: Pod "nginx-deployment-7b8c6f4498-84vmh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-84vmh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-84vmh,UID:27d9a580-c9a0-41fe-a2b1-66548c205b35,ResourceVersion:2290067,Generation:0,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x8d8d947 0x8d8d948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8d9c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8d9e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.214,StartTime:2020-08-24 04:45:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-24 04:46:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4fc659fa9d081a58db36d672de2c5a9d7656c6b70543f1422958ace4b03187b3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.080: INFO: Pod "nginx-deployment-7b8c6f4498-89pph" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-89pph,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-89pph,UID:995e9855-e7be-4a1e-9376-70015bc9e257,ResourceVersion:2290233,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x8d8dab7 0x8d8dab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8db30} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8db50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.081: INFO: Pod "nginx-deployment-7b8c6f4498-8lcr5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8lcr5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-8lcr5,UID:435616cf-953c-4753-8497-c0cea3e42e9d,ResourceVersion:2290275,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x8d8dc17 0x8d8dc18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8dc90} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8dcb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.083: INFO: Pod "nginx-deployment-7b8c6f4498-fsft9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fsft9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-fsft9,UID:c39b891e-b66f-4702-af5a-73fb1190c21b,ResourceVersion:2290254,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x8d8dd77 0x8d8dd78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8ddf0} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8de10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.084: INFO: Pod "nginx-deployment-7b8c6f4498-hp8m5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hp8m5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-hp8m5,UID:f530066f-4328-4b8e-8370-05a4bc5f7a78,ResourceVersion:2290080,Generation:0,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x8d8def7 0x8d8def8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8df80} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8dfa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.216,StartTime:2020-08-24 04:45:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-24 04:46:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b3820c5eb67a30f5acf0959718cec1d0fd5ab32e37e4aec41ad1bb0e0dea1fd6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.085: INFO: Pod "nginx-deployment-7b8c6f4498-j9pfj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j9pfj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-j9pfj,UID:10173b41-d65b-4c2e-b302-6a8e8741dd0a,ResourceVersion:2290084,Generation:0,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x6bb2127 0x6bb2128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x6bb21c0} {node.kubernetes.io/unreachable Exists  NoExecute 0x6bb2220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:10 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:10 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.191,StartTime:2020-08-24 04:45:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-24 04:46:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://580afce953342070d5c99b8cf8c173978d520600f504e20479081bf416943de2}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.086: INFO: Pod "nginx-deployment-7b8c6f4498-kv6vx" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kv6vx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-kv6vx,UID:92c511f8-a8b4-4b81-8fd8-d872c1579e3a,ResourceVersion:2290075,Generation:0,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939a097 0x939a098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939a110} {node.kubernetes.io/unreachable Exists  NoExecute 0x939a130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.213,StartTime:2020-08-24 04:45:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-24 04:46:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a1c550583cd1734aefee3b0c89aceb6b6325a6e5cf505a4ad6625bf47a771e63}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.087: INFO: Pod "nginx-deployment-7b8c6f4498-lwmgt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-lwmgt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-lwmgt,UID:6e0582ba-b314-4298-a2f3-efb391bee7ea,ResourceVersion:2290211,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939a207 0x939a208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939a280} {node.kubernetes.io/unreachable Exists  NoExecute 0x939a2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.089: INFO: Pod "nginx-deployment-7b8c6f4498-nqzk9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nqzk9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-nqzk9,UID:e10686a2-5bf1-4a2a-bbe3-a20c3347bbeb,ResourceVersion:2290239,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939a367 0x939a368}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939a3e0} {node.kubernetes.io/unreachable Exists  NoExecute 0x939a400}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.090: INFO: Pod "nginx-deployment-7b8c6f4498-qq9pj" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qq9pj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-qq9pj,UID:00be6335-d84d-44bb-9242-b4dea29ec49e,ResourceVersion:2290073,Generation:0,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939a4c7 0x939a4c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939a540} {node.kubernetes.io/unreachable Exists  NoExecute 0x939a560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.189,StartTime:2020-08-24 04:45:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-24 04:46:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://619f51ac86e4824d1546a0a515369bb61dd557fd3659c49726fee942a64e6489}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.092: INFO: Pod "nginx-deployment-7b8c6f4498-r9z4m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r9z4m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-r9z4m,UID:3df83cf6-4ca8-47de-a795-05b86b26927c,ResourceVersion:2290241,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939a637 0x939a638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939a6b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x939a6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.094: INFO: Pod "nginx-deployment-7b8c6f4498-st4j5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-st4j5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-st4j5,UID:a0ac9856-81d6-4078-9307-477e1e385f0e,ResourceVersion:2290269,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939a797 0x939a798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939a810} {node.kubernetes.io/unreachable Exists  NoExecute 0x939a830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.095: INFO: Pod "nginx-deployment-7b8c6f4498-sxfhq" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sxfhq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-sxfhq,UID:abfd8941-90bf-4674-b557-124d788f9269,ResourceVersion:2290062,Generation:0,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939a8f7 0x939a8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939a970} {node.kubernetes.io/unreachable Exists  NoExecute 0x939a990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.212,StartTime:2020-08-24 04:45:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-24 04:46:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://5273573eff4e7b181a0c1fefc598f3603a2f47f747d8f95da75f0b01c45d9235}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.097: INFO: Pod "nginx-deployment-7b8c6f4498-sxnfs" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sxnfs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-sxnfs,UID:5917edc7-0ac4-42f7-a7c8-0820ac35e9fd,ResourceVersion:2290201,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939aa77 0x939aa78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939aaf0} {node.kubernetes.io/unreachable Exists  NoExecute 0x939ab10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-24 04:46:13 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.098: INFO: Pod "nginx-deployment-7b8c6f4498-vrjdx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vrjdx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-vrjdx,UID:a216de94-f4c9-48e9-b315-fe835ce1e1bf,ResourceVersion:2290222,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939abd7 0x939abd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939ac60} {node.kubernetes.io/unreachable Exists  NoExecute 0x939ac80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.100: INFO: Pod "nginx-deployment-7b8c6f4498-vwdnj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vwdnj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-vwdnj,UID:be67ece6-3df7-4698-833a-2a179f8bbef2,ResourceVersion:2290235,Generation:0,CreationTimestamp:2020-08-24 04:46:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939ad67 0x939ad68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939ade0} {node.kubernetes.io/unreachable Exists  NoExecute 0x939ae00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:13 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-24 04:46:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.101: INFO: Pod "nginx-deployment-7b8c6f4498-xl9c6" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xl9c6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-xl9c6,UID:6a28dc72-784a-4f68-bf95-46b51e8eac5e,ResourceVersion:2290045,Generation:0,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939aec7 0x939aec8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939af40} {node.kubernetes.io/unreachable Exists  NoExecute 0x939af60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.187,StartTime:2020-08-24 04:45:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-24 04:46:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://47245d351d1fc6e9e1d7d678422d1b40d306e444200c3008ae7f723e9bc2233d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Aug 24 04:46:19.102: INFO: Pod "nginx-deployment-7b8c6f4498-zpkmh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zpkmh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4380,SelfLink:/api/v1/namespaces/deployment-4380/pods/nginx-deployment-7b8c6f4498-zpkmh,UID:b246cc4a-69d0-42b0-b994-aceb56b971bc,ResourceVersion:2290064,Generation:0,CreationTimestamp:2020-08-24 04:45:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 5304e9bd-2ad0-455b-ac55-841f92d42828 0x939b037 0x939b038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gtdqz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gtdqz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-gtdqz true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x939b0b0} {node.kubernetes.io/unreachable Exists  NoExecute 0x939b0d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:46:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:45:59 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.188,StartTime:2020-08-24 04:45:59 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-24 04:46:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f34781ab4eed61d4d8169a27dfbb13fd1612f12e6e9b9547265661d3e8a78b0a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:46:19.102: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4380" for this suite.
Aug 24 04:46:47.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:46:47.417: INFO: namespace deployment-4380 deletion completed in 28.304060173s

• [SLOW TEST:48.270 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:46:47.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:46:52.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3651" for this suite.
Aug 24 04:47:14.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:47:14.809: INFO: namespace replication-controller-3651 deletion completed in 22.204390274s

• [SLOW TEST:27.391 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:47:14.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:47:14.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540" in namespace "downward-api-3075" to be "success or failure"
Aug 24 04:47:14.928: INFO: Pod "downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540": Phase="Pending", Reason="", readiness=false. Elapsed: 4.285498ms
Aug 24 04:47:17.112: INFO: Pod "downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188399393s
Aug 24 04:47:19.120: INFO: Pod "downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540": Phase="Running", Reason="", readiness=true. Elapsed: 4.195795591s
Aug 24 04:47:21.149: INFO: Pod "downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.225497807s
STEP: Saw pod success
Aug 24 04:47:21.150: INFO: Pod "downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540" satisfied condition "success or failure"
Aug 24 04:47:21.156: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540 container client-container: 
STEP: delete the pod
Aug 24 04:47:21.202: INFO: Waiting for pod downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540 to disappear
Aug 24 04:47:21.218: INFO: Pod downwardapi-volume-b6b82b0e-2ec3-4c86-9c7f-286403660540 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:47:21.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3075" for this suite.
Aug 24 04:47:27.246: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:47:27.391: INFO: namespace downward-api-3075 deletion completed in 6.160079112s

• [SLOW TEST:12.579 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:47:27.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 24 04:47:32.090: INFO: Successfully updated pod "labelsupdate89df8d32-6680-434e-b96a-31f5534fb5bf"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:47:36.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5961" for this suite.
Aug 24 04:47:58.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:47:58.368: INFO: namespace projected-5961 deletion completed in 22.176425964s

• [SLOW TEST:30.977 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc 
  should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:47:58.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 24 04:47:58.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-7567'
Aug 24 04:47:59.752: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 24 04:47:59.753: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Aug 24 04:47:59.807: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-f7q99]
Aug 24 04:47:59.807: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-f7q99" in namespace "kubectl-7567" to be "running and ready"
Aug 24 04:47:59.824: INFO: Pod "e2e-test-nginx-rc-f7q99": Phase="Pending", Reason="", readiness=false. Elapsed: 16.697233ms
Aug 24 04:48:01.831: INFO: Pod "e2e-test-nginx-rc-f7q99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024016382s
Aug 24 04:48:03.837: INFO: Pod "e2e-test-nginx-rc-f7q99": Phase="Running", Reason="", readiness=true. Elapsed: 4.029858485s
Aug 24 04:48:03.837: INFO: Pod "e2e-test-nginx-rc-f7q99" satisfied condition "running and ready"
Aug 24 04:48:03.837: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-f7q99]
Aug 24 04:48:03.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-7567'
Aug 24 04:48:05.026: INFO: stderr: ""
Aug 24 04:48:05.026: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Aug 24 04:48:05.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-7567'
Aug 24 04:48:06.158: INFO: stderr: ""
Aug 24 04:48:06.159: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:48:06.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7567" for this suite.
Aug 24 04:48:12.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:48:12.361: INFO: namespace kubectl-7567 deletion completed in 6.193134523s

• [SLOW TEST:13.991 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:48:12.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-a2bc4afe-eccc-46f2-9199-73b3c12b1270
STEP: Creating a pod to test consume configMaps
Aug 24 04:48:12.467: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e4d8eb97-61a4-4e5d-815c-941d31c9b240" in namespace "projected-1471" to be "success or failure"
Aug 24 04:48:12.529: INFO: Pod "pod-projected-configmaps-e4d8eb97-61a4-4e5d-815c-941d31c9b240": Phase="Pending", Reason="", readiness=false. Elapsed: 61.273375ms
Aug 24 04:48:14.537: INFO: Pod "pod-projected-configmaps-e4d8eb97-61a4-4e5d-815c-941d31c9b240": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069223248s
Aug 24 04:48:16.544: INFO: Pod "pod-projected-configmaps-e4d8eb97-61a4-4e5d-815c-941d31c9b240": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077144096s
STEP: Saw pod success
Aug 24 04:48:16.545: INFO: Pod "pod-projected-configmaps-e4d8eb97-61a4-4e5d-815c-941d31c9b240" satisfied condition "success or failure"
Aug 24 04:48:16.553: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e4d8eb97-61a4-4e5d-815c-941d31c9b240 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 24 04:48:16.580: INFO: Waiting for pod pod-projected-configmaps-e4d8eb97-61a4-4e5d-815c-941d31c9b240 to disappear
Aug 24 04:48:16.585: INFO: Pod pod-projected-configmaps-e4d8eb97-61a4-4e5d-815c-941d31c9b240 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:48:16.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1471" for this suite.
Aug 24 04:48:22.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:48:22.769: INFO: namespace projected-1471 deletion completed in 6.174365784s

• [SLOW TEST:10.404 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:48:22.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 24 04:48:22.881: INFO: Waiting up to 5m0s for pod "pod-3380dac6-16eb-4b8f-ab3a-2b27e02341e1" in namespace "emptydir-8137" to be "success or failure"
Aug 24 04:48:22.921: INFO: Pod "pod-3380dac6-16eb-4b8f-ab3a-2b27e02341e1": Phase="Pending", Reason="", readiness=false. Elapsed: 39.866329ms
Aug 24 04:48:24.927: INFO: Pod "pod-3380dac6-16eb-4b8f-ab3a-2b27e02341e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045804924s
Aug 24 04:48:26.934: INFO: Pod "pod-3380dac6-16eb-4b8f-ab3a-2b27e02341e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053453534s
STEP: Saw pod success
Aug 24 04:48:26.935: INFO: Pod "pod-3380dac6-16eb-4b8f-ab3a-2b27e02341e1" satisfied condition "success or failure"
Aug 24 04:48:26.940: INFO: Trying to get logs from node iruya-worker pod pod-3380dac6-16eb-4b8f-ab3a-2b27e02341e1 container test-container: 
STEP: delete the pod
Aug 24 04:48:26.982: INFO: Waiting for pod pod-3380dac6-16eb-4b8f-ab3a-2b27e02341e1 to disappear
Aug 24 04:48:26.997: INFO: Pod pod-3380dac6-16eb-4b8f-ab3a-2b27e02341e1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:48:26.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8137" for this suite.
Aug 24 04:48:33.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:48:33.218: INFO: namespace emptydir-8137 deletion completed in 6.21245212s

• [SLOW TEST:10.447 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:48:33.221: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Aug 24 04:48:33.300: INFO: Waiting up to 5m0s for pod "pod-73bfd625-c968-4c0b-9a59-1a8c048027e4" in namespace "emptydir-865" to be "success or failure"
Aug 24 04:48:33.342: INFO: Pod "pod-73bfd625-c968-4c0b-9a59-1a8c048027e4": Phase="Pending", Reason="", readiness=false. Elapsed: 41.218913ms
Aug 24 04:48:35.348: INFO: Pod "pod-73bfd625-c968-4c0b-9a59-1a8c048027e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047717344s
Aug 24 04:48:37.355: INFO: Pod "pod-73bfd625-c968-4c0b-9a59-1a8c048027e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054595602s
STEP: Saw pod success
Aug 24 04:48:37.355: INFO: Pod "pod-73bfd625-c968-4c0b-9a59-1a8c048027e4" satisfied condition "success or failure"
Aug 24 04:48:37.359: INFO: Trying to get logs from node iruya-worker pod pod-73bfd625-c968-4c0b-9a59-1a8c048027e4 container test-container: 
STEP: delete the pod
Aug 24 04:48:37.405: INFO: Waiting for pod pod-73bfd625-c968-4c0b-9a59-1a8c048027e4 to disappear
Aug 24 04:48:37.430: INFO: Pod pod-73bfd625-c968-4c0b-9a59-1a8c048027e4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:48:37.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-865" for this suite.
Aug 24 04:48:43.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:48:43.652: INFO: namespace emptydir-865 deletion completed in 6.210982247s

• [SLOW TEST:10.432 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:48:43.654: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:48:43.781: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10f008a0-5768-47c8-9ed5-eab1a667f69d" in namespace "downward-api-7406" to be "success or failure"
Aug 24 04:48:43.840: INFO: Pod "downwardapi-volume-10f008a0-5768-47c8-9ed5-eab1a667f69d": Phase="Pending", Reason="", readiness=false. Elapsed: 59.630716ms
Aug 24 04:48:45.955: INFO: Pod "downwardapi-volume-10f008a0-5768-47c8-9ed5-eab1a667f69d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174077168s
Aug 24 04:48:47.964: INFO: Pod "downwardapi-volume-10f008a0-5768-47c8-9ed5-eab1a667f69d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.182725037s
STEP: Saw pod success
Aug 24 04:48:47.964: INFO: Pod "downwardapi-volume-10f008a0-5768-47c8-9ed5-eab1a667f69d" satisfied condition "success or failure"
Aug 24 04:48:47.970: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-10f008a0-5768-47c8-9ed5-eab1a667f69d container client-container: 
STEP: delete the pod
Aug 24 04:48:48.062: INFO: Waiting for pod downwardapi-volume-10f008a0-5768-47c8-9ed5-eab1a667f69d to disappear
Aug 24 04:48:48.076: INFO: Pod downwardapi-volume-10f008a0-5768-47c8-9ed5-eab1a667f69d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:48:48.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7406" for this suite.
Aug 24 04:48:54.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:48:54.243: INFO: namespace downward-api-7406 deletion completed in 6.156499997s

• [SLOW TEST:10.590 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:48:54.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 24 04:48:58.522: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:48:58.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5109" for this suite.
Aug 24 04:49:04.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:49:04.999: INFO: namespace container-runtime-5109 deletion completed in 6.254060053s

• [SLOW TEST:10.754 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:49:05.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 04:49:05.070: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Aug 24 04:49:10.077: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 24 04:49:10.078: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 24 04:49:14.158: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4232,SelfLink:/apis/apps/v1/namespaces/deployment-4232/deployments/test-cleanup-deployment,UID:bc20b49f-10ec-475e-8d21-bde33f4316bf,ResourceVersion:2291207,Generation:1,CreationTimestamp:2020-08-24 04:49:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-24 04:49:10 +0000 UTC 2020-08-24 04:49:10 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-24 04:49:13 +0000 UTC 2020-08-24 04:49:10 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 24 04:49:14.165: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4232,SelfLink:/apis/apps/v1/namespaces/deployment-4232/replicasets/test-cleanup-deployment-55bbcbc84c,UID:3f0d246b-705c-4460-848a-d8fb0607247d,ResourceVersion:2291195,Generation:1,CreationTimestamp:2020-08-24 04:49:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment bc20b49f-10ec-475e-8d21-bde33f4316bf 0x90f4877 0x90f4878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 24 04:49:14.173: INFO: Pod "test-cleanup-deployment-55bbcbc84c-74dmk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-74dmk,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4232,SelfLink:/api/v1/namespaces/deployment-4232/pods/test-cleanup-deployment-55bbcbc84c-74dmk,UID:c7a0a01d-9119-4127-93c5-d90e238c68c9,ResourceVersion:2291194,Generation:0,CreationTimestamp:2020-08-24 04:49:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 3f0d246b-705c-4460-848a-d8fb0607247d 0x90f4eb7 0x90f4eb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-d8984 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-d8984,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-d8984 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x90f4f30} {node.kubernetes.io/unreachable Exists  NoExecute 0x90f4f50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:49:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:49:13 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:49:13 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 04:49:10 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.243,StartTime:2020-08-24 04:49:10 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-24 04:49:13 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://aa8f5b738d931570bc8b351ca7594622ff45ed835abfb98a6dc0e43dec5dff98}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:49:14.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4232" for this suite.
Aug 24 04:49:20.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:49:20.428: INFO: namespace deployment-4232 deletion completed in 6.244109225s

• [SLOW TEST:15.423 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
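The "should delete old replica sets" test above relies on the Deployment's revision-history cleanup: the struct dump shows `RevisionHistoryLimit:*0`, which tells the controller to garbage-collect superseded ReplicaSets immediately after a rollout. A minimal manifest sketch of that setup (field values taken from the dump above; this is an illustration, not the test's exact fixture):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-cleanup-deployment
  labels:
    name: cleanup-pod
spec:
  replicas: 1
  revisionHistoryLimit: 0      # old ReplicaSets are cleaned up right after rollout
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
```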
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:49:20.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Aug 24 04:49:20.606: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix239840877/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:49:21.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5920" for this suite.
Aug 24 04:49:27.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:49:27.734: INFO: namespace kubectl-5920 deletion completed in 6.190091176s

• [SLOW TEST:7.300 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:49:27.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 24 04:49:27.807: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:49:33.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-966" for this suite.
Aug 24 04:49:39.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:49:39.752: INFO: namespace init-container-966 deletion completed in 6.180838967s

• [SLOW TEST:12.015 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
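The InitContainer test above creates a `restartPolicy: Never` pod whose init container fails, then verifies the app container never starts and the pod ends up Failed. A sketch of a pod with that shape (names, image, and commands are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail          # illustrative name
spec:
  restartPolicy: Never         # init failure is terminal: pod phase becomes Failed
  initContainers:
  - name: init1
    image: busybox
    command: ["/bin/false"]    # init container exits non-zero
  containers:
  - name: run1
    image: busybox
    command: ["/bin/true"]     # never started because init1 failed
```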
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:49:39.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 24 04:49:39.823: INFO: Waiting up to 5m0s for pod "pod-c5bee2da-5857-4c87-acca-7b4c7382f384" in namespace "emptydir-7002" to be "success or failure"
Aug 24 04:49:39.844: INFO: Pod "pod-c5bee2da-5857-4c87-acca-7b4c7382f384": Phase="Pending", Reason="", readiness=false. Elapsed: 20.641857ms
Aug 24 04:49:42.080: INFO: Pod "pod-c5bee2da-5857-4c87-acca-7b4c7382f384": Phase="Pending", Reason="", readiness=false. Elapsed: 2.256113323s
Aug 24 04:49:44.086: INFO: Pod "pod-c5bee2da-5857-4c87-acca-7b4c7382f384": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.262902468s
STEP: Saw pod success
Aug 24 04:49:44.086: INFO: Pod "pod-c5bee2da-5857-4c87-acca-7b4c7382f384" satisfied condition "success or failure"
Aug 24 04:49:44.090: INFO: Trying to get logs from node iruya-worker2 pod pod-c5bee2da-5857-4c87-acca-7b4c7382f384 container test-container: 
STEP: delete the pod
Aug 24 04:49:44.132: INFO: Waiting for pod pod-c5bee2da-5857-4c87-acca-7b4c7382f384 to disappear
Aug 24 04:49:44.137: INFO: Pod pod-c5bee2da-5857-4c87-acca-7b4c7382f384 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:49:44.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7002" for this suite.
Aug 24 04:49:50.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:49:50.309: INFO: namespace emptydir-7002 deletion completed in 6.160821262s

• [SLOW TEST:10.554 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
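The EmptyDir test above ("non-root,0644,tmpfs") mounts a memory-backed emptyDir as a non-root user and checks a file created with mode 0644 inside it. A sketch of the pod shape involved (image, user ID, and command are illustrative; the real test image performs the file-mode checks itself):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0644      # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001            # non-root, per the test variant
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
```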
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:49:50.310: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-080c4561-5e1b-4ef8-9fe6-aec8a501e9ce
STEP: Creating a pod to test consume configMaps
Aug 24 04:49:50.431: INFO: Waiting up to 5m0s for pod "pod-configmaps-b690bfc1-dbd1-4b7c-ad15-2b68abe1dac0" in namespace "configmap-5608" to be "success or failure"
Aug 24 04:49:50.456: INFO: Pod "pod-configmaps-b690bfc1-dbd1-4b7c-ad15-2b68abe1dac0": Phase="Pending", Reason="", readiness=false. Elapsed: 24.814203ms
Aug 24 04:49:52.541: INFO: Pod "pod-configmaps-b690bfc1-dbd1-4b7c-ad15-2b68abe1dac0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110259813s
Aug 24 04:49:54.550: INFO: Pod "pod-configmaps-b690bfc1-dbd1-4b7c-ad15-2b68abe1dac0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119095827s
STEP: Saw pod success
Aug 24 04:49:54.550: INFO: Pod "pod-configmaps-b690bfc1-dbd1-4b7c-ad15-2b68abe1dac0" satisfied condition "success or failure"
Aug 24 04:49:54.556: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b690bfc1-dbd1-4b7c-ad15-2b68abe1dac0 container configmap-volume-test: 
STEP: delete the pod
Aug 24 04:49:54.595: INFO: Waiting for pod pod-configmaps-b690bfc1-dbd1-4b7c-ad15-2b68abe1dac0 to disappear
Aug 24 04:49:54.633: INFO: Pod pod-configmaps-b690bfc1-dbd1-4b7c-ad15-2b68abe1dac0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:49:54.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5608" for this suite.
Aug 24 04:50:00.671: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:50:00.817: INFO: namespace configmap-5608 deletion completed in 6.172962075s

• [SLOW TEST:10.507 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
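The ConfigMap volume test above ("mappings and Item mode set") projects a ConfigMap key to a remapped path with a per-item file mode. A sketch of such a volume definition (key names and paths are illustrative; the test's actual data differs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-mappings   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lR /etc/configmap-volume"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # as created by the test's STEP above
      items:
      - key: data-1                     # illustrative key
        path: path/to/data-2            # key remapped to a different file path
        mode: 0400                      # per-item file mode
```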
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:50:00.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-15ce3c32-9cd3-4b13-b6c0-a61241fea1d7
STEP: Creating a pod to test consume secrets
Aug 24 04:50:00.925: INFO: Waiting up to 5m0s for pod "pod-secrets-08a78c78-ec80-4b91-88f2-a57670fbf7db" in namespace "secrets-1055" to be "success or failure"
Aug 24 04:50:00.960: INFO: Pod "pod-secrets-08a78c78-ec80-4b91-88f2-a57670fbf7db": Phase="Pending", Reason="", readiness=false. Elapsed: 34.035699ms
Aug 24 04:50:02.966: INFO: Pod "pod-secrets-08a78c78-ec80-4b91-88f2-a57670fbf7db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040289683s
Aug 24 04:50:04.972: INFO: Pod "pod-secrets-08a78c78-ec80-4b91-88f2-a57670fbf7db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045988104s
STEP: Saw pod success
Aug 24 04:50:04.972: INFO: Pod "pod-secrets-08a78c78-ec80-4b91-88f2-a57670fbf7db" satisfied condition "success or failure"
Aug 24 04:50:04.975: INFO: Trying to get logs from node iruya-worker pod pod-secrets-08a78c78-ec80-4b91-88f2-a57670fbf7db container secret-volume-test: 
STEP: delete the pod
Aug 24 04:50:04.997: INFO: Waiting for pod pod-secrets-08a78c78-ec80-4b91-88f2-a57670fbf7db to disappear
Aug 24 04:50:05.001: INFO: Pod pod-secrets-08a78c78-ec80-4b91-88f2-a57670fbf7db no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:50:05.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1055" for this suite.
Aug 24 04:50:11.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:50:11.190: INFO: namespace secrets-1055 deletion completed in 6.177334008s

• [SLOW TEST:10.371 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:50:11.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-2ccc93c5-4f2a-47f0-b025-3fee33bf0a68
STEP: Creating a pod to test consume secrets
Aug 24 04:50:11.394: INFO: Waiting up to 5m0s for pod "pod-secrets-e5c401fe-f8bd-427f-a6ac-0d31805bab65" in namespace "secrets-8593" to be "success or failure"
Aug 24 04:50:11.415: INFO: Pod "pod-secrets-e5c401fe-f8bd-427f-a6ac-0d31805bab65": Phase="Pending", Reason="", readiness=false. Elapsed: 20.955806ms
Aug 24 04:50:13.422: INFO: Pod "pod-secrets-e5c401fe-f8bd-427f-a6ac-0d31805bab65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02745741s
Aug 24 04:50:15.432: INFO: Pod "pod-secrets-e5c401fe-f8bd-427f-a6ac-0d31805bab65": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038223056s
STEP: Saw pod success
Aug 24 04:50:15.433: INFO: Pod "pod-secrets-e5c401fe-f8bd-427f-a6ac-0d31805bab65" satisfied condition "success or failure"
Aug 24 04:50:15.437: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e5c401fe-f8bd-427f-a6ac-0d31805bab65 container secret-volume-test: 
STEP: delete the pod
Aug 24 04:50:15.463: INFO: Waiting for pod pod-secrets-e5c401fe-f8bd-427f-a6ac-0d31805bab65 to disappear
Aug 24 04:50:15.467: INFO: Pod pod-secrets-e5c401fe-f8bd-427f-a6ac-0d31805bab65 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:50:15.467: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8593" for this suite.
Aug 24 04:50:21.492: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:50:21.637: INFO: namespace secrets-8593 deletion completed in 6.159614525s

• [SLOW TEST:10.444 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
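The two Secrets volume tests above exercise file-permission control on mounted secret keys: one via `defaultMode` on the whole volume, the other via per-item `mode`. A sketch of a secret volume showing both knobs (secret name, key, and paths are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-modes        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -lR /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test    # illustrative secret
      defaultMode: 0400          # applies to every projected key...
      items:
      - key: data-1              # illustrative key
        path: new-path-data-1
        mode: 0644               # ...unless overridden per item
```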
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:50:21.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Aug 24 04:50:22.282: INFO: created pod pod-service-account-defaultsa
Aug 24 04:50:22.283: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Aug 24 04:50:22.288: INFO: created pod pod-service-account-mountsa
Aug 24 04:50:22.288: INFO: pod pod-service-account-mountsa service account token volume mount: true
Aug 24 04:50:22.324: INFO: created pod pod-service-account-nomountsa
Aug 24 04:50:22.324: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Aug 24 04:50:22.379: INFO: created pod pod-service-account-defaultsa-mountspec
Aug 24 04:50:22.379: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Aug 24 04:50:22.396: INFO: created pod pod-service-account-mountsa-mountspec
Aug 24 04:50:22.396: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Aug 24 04:50:22.450: INFO: created pod pod-service-account-nomountsa-mountspec
Aug 24 04:50:22.450: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Aug 24 04:50:22.511: INFO: created pod pod-service-account-defaultsa-nomountspec
Aug 24 04:50:22.511: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Aug 24 04:50:22.552: INFO: created pod pod-service-account-mountsa-nomountspec
Aug 24 04:50:22.553: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Aug 24 04:50:22.603: INFO: created pod pod-service-account-nomountsa-nomountspec
Aug 24 04:50:22.603: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:50:22.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-8266" for this suite.
Aug 24 04:50:52.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:50:52.909: INFO: namespace svcaccounts-8266 deletion completed in 30.227431042s

• [SLOW TEST:31.268 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:50:52.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 24 04:50:52.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1171'
Aug 24 04:50:57.258: INFO: stderr: ""
Aug 24 04:50:57.258: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 24 04:50:58.267: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:50:58.268: INFO: Found 0 / 1
Aug 24 04:50:59.267: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:50:59.268: INFO: Found 0 / 1
Aug 24 04:51:00.297: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:51:00.297: INFO: Found 0 / 1
Aug 24 04:51:01.265: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:51:01.266: INFO: Found 1 / 1
Aug 24 04:51:01.266: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Aug 24 04:51:01.290: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:51:01.290: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 24 04:51:01.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-hn2p4 --namespace=kubectl-1171 -p {"metadata":{"annotations":{"x":"y"}}}'
Aug 24 04:51:02.423: INFO: stderr: ""
Aug 24 04:51:02.423: INFO: stdout: "pod/redis-master-hn2p4 patched\n"
STEP: checking annotations
Aug 24 04:51:02.428: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 04:51:02.428: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:51:02.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1171" for this suite.
Aug 24 04:51:24.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:51:24.607: INFO: namespace kubectl-1171 deletion completed in 22.152320854s

• [SLOW TEST:31.698 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:51:24.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5b8b719b-b466-4045-ae6b-c563963c7132
STEP: Creating a pod to test consume configMaps
Aug 24 04:51:24.701: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8395485a-b95e-407f-a6e6-dbc31f296db7" in namespace "projected-6702" to be "success or failure"
Aug 24 04:51:24.718: INFO: Pod "pod-projected-configmaps-8395485a-b95e-407f-a6e6-dbc31f296db7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009659ms
Aug 24 04:51:26.725: INFO: Pod "pod-projected-configmaps-8395485a-b95e-407f-a6e6-dbc31f296db7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023347798s
Aug 24 04:51:28.733: INFO: Pod "pod-projected-configmaps-8395485a-b95e-407f-a6e6-dbc31f296db7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030846681s
STEP: Saw pod success
Aug 24 04:51:28.733: INFO: Pod "pod-projected-configmaps-8395485a-b95e-407f-a6e6-dbc31f296db7" satisfied condition "success or failure"
Aug 24 04:51:28.738: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-8395485a-b95e-407f-a6e6-dbc31f296db7 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 24 04:51:28.818: INFO: Waiting for pod pod-projected-configmaps-8395485a-b95e-407f-a6e6-dbc31f296db7 to disappear
Aug 24 04:51:28.834: INFO: Pod pod-projected-configmaps-8395485a-b95e-407f-a6e6-dbc31f296db7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:51:28.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6702" for this suite.
Aug 24 04:51:34.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:51:35.048: INFO: namespace projected-6702 deletion completed in 6.203535377s

• [SLOW TEST:10.439 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:51:35.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-789faced-0643-46c6-a7a1-bcfbae9268db
STEP: Creating a pod to test consume configMaps
Aug 24 04:51:35.173: INFO: Waiting up to 5m0s for pod "pod-configmaps-b2a15a29-5973-4408-9d88-896e54b52f12" in namespace "configmap-9147" to be "success or failure"
Aug 24 04:51:35.195: INFO: Pod "pod-configmaps-b2a15a29-5973-4408-9d88-896e54b52f12": Phase="Pending", Reason="", readiness=false. Elapsed: 21.691437ms
Aug 24 04:51:37.203: INFO: Pod "pod-configmaps-b2a15a29-5973-4408-9d88-896e54b52f12": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029981878s
Aug 24 04:51:39.211: INFO: Pod "pod-configmaps-b2a15a29-5973-4408-9d88-896e54b52f12": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037534896s
STEP: Saw pod success
Aug 24 04:51:39.211: INFO: Pod "pod-configmaps-b2a15a29-5973-4408-9d88-896e54b52f12" satisfied condition "success or failure"
Aug 24 04:51:39.217: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b2a15a29-5973-4408-9d88-896e54b52f12 container configmap-volume-test: 
STEP: delete the pod
Aug 24 04:51:39.265: INFO: Waiting for pod pod-configmaps-b2a15a29-5973-4408-9d88-896e54b52f12 to disappear
Aug 24 04:51:39.277: INFO: Pod pod-configmaps-b2a15a29-5973-4408-9d88-896e54b52f12 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:51:39.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9147" for this suite.
Aug 24 04:51:45.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:51:45.446: INFO: namespace configmap-9147 deletion completed in 6.157234508s

• [SLOW TEST:10.396 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:51:45.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 24 04:51:45.571: INFO: Waiting up to 5m0s for pod "downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be" in namespace "downward-api-1455" to be "success or failure"
Aug 24 04:51:45.584: INFO: Pod "downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be": Phase="Pending", Reason="", readiness=false. Elapsed: 12.89119ms
Aug 24 04:51:48.285: INFO: Pod "downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.713629639s
Aug 24 04:51:50.293: INFO: Pod "downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.721268996s
Aug 24 04:51:52.301: INFO: Pod "downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.729842423s
STEP: Saw pod success
Aug 24 04:51:52.302: INFO: Pod "downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be" satisfied condition "success or failure"
Aug 24 04:51:52.332: INFO: Trying to get logs from node iruya-worker2 pod downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be container dapi-container: 
STEP: delete the pod
Aug 24 04:51:52.372: INFO: Waiting for pod downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be to disappear
Aug 24 04:51:52.381: INFO: Pod downward-api-0af0cd05-79ad-4ab4-830c-fb293b9e76be no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:51:52.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1455" for this suite.
Aug 24 04:51:58.405: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:51:58.536: INFO: namespace downward-api-1455 deletion completed in 6.147350101s

• [SLOW TEST:13.090 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:51:58.540: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 24 04:51:58.624: INFO: Waiting up to 5m0s for pod "downward-api-4eb06d27-a40e-4c8b-8ff9-6859a370ed6b" in namespace "downward-api-8420" to be "success or failure"
Aug 24 04:51:58.657: INFO: Pod "downward-api-4eb06d27-a40e-4c8b-8ff9-6859a370ed6b": Phase="Pending", Reason="", readiness=false. Elapsed: 32.92312ms
Aug 24 04:52:00.664: INFO: Pod "downward-api-4eb06d27-a40e-4c8b-8ff9-6859a370ed6b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040610493s
Aug 24 04:52:02.672: INFO: Pod "downward-api-4eb06d27-a40e-4c8b-8ff9-6859a370ed6b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048105868s
STEP: Saw pod success
Aug 24 04:52:02.672: INFO: Pod "downward-api-4eb06d27-a40e-4c8b-8ff9-6859a370ed6b" satisfied condition "success or failure"
Aug 24 04:52:02.677: INFO: Trying to get logs from node iruya-worker pod downward-api-4eb06d27-a40e-4c8b-8ff9-6859a370ed6b container dapi-container: 
STEP: delete the pod
Aug 24 04:52:02.726: INFO: Waiting for pod downward-api-4eb06d27-a40e-4c8b-8ff9-6859a370ed6b to disappear
Aug 24 04:52:02.741: INFO: Pod downward-api-4eb06d27-a40e-4c8b-8ff9-6859a370ed6b no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:52:02.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8420" for this suite.
Aug 24 04:52:08.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:52:08.909: INFO: namespace downward-api-8420 deletion completed in 6.159902088s

• [SLOW TEST:10.369 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:52:08.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-m49s
STEP: Creating a pod to test atomic-volume-subpath
Aug 24 04:52:09.007: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-m49s" in namespace "subpath-706" to be "success or failure"
Aug 24 04:52:09.017: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Pending", Reason="", readiness=false. Elapsed: 10.188714ms
Aug 24 04:52:11.024: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017447093s
Aug 24 04:52:13.032: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 4.025191801s
Aug 24 04:52:15.041: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 6.033558332s
Aug 24 04:52:17.048: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 8.041404466s
Aug 24 04:52:19.056: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 10.049060015s
Aug 24 04:52:21.064: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 12.056827164s
Aug 24 04:52:23.070: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 14.06343463s
Aug 24 04:52:25.077: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 16.069881975s
Aug 24 04:52:27.085: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 18.078265628s
Aug 24 04:52:29.093: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 20.086046683s
Aug 24 04:52:31.101: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Running", Reason="", readiness=true. Elapsed: 22.093781113s
Aug 24 04:52:33.109: INFO: Pod "pod-subpath-test-projected-m49s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.101895261s
STEP: Saw pod success
Aug 24 04:52:33.109: INFO: Pod "pod-subpath-test-projected-m49s" satisfied condition "success or failure"
Aug 24 04:52:33.115: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-m49s container test-container-subpath-projected-m49s: 
STEP: delete the pod
Aug 24 04:52:33.375: INFO: Waiting for pod pod-subpath-test-projected-m49s to disappear
Aug 24 04:52:33.405: INFO: Pod pod-subpath-test-projected-m49s no longer exists
STEP: Deleting pod pod-subpath-test-projected-m49s
Aug 24 04:52:33.405: INFO: Deleting pod "pod-subpath-test-projected-m49s" in namespace "subpath-706"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:52:33.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-706" for this suite.
Aug 24 04:52:39.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:52:39.685: INFO: namespace subpath-706 deletion completed in 6.267831513s

• [SLOW TEST:30.775 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:52:39.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug 24 04:52:39.763: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 24 04:52:39.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-472'
Aug 24 04:52:41.353: INFO: stderr: ""
Aug 24 04:52:41.353: INFO: stdout: "service/redis-slave created\n"
Aug 24 04:52:41.354: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 24 04:52:41.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-472'
Aug 24 04:52:42.987: INFO: stderr: ""
Aug 24 04:52:42.987: INFO: stdout: "service/redis-master created\n"
Aug 24 04:52:42.988: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 24 04:52:42.988: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-472'
Aug 24 04:52:44.580: INFO: stderr: ""
Aug 24 04:52:44.580: INFO: stdout: "service/frontend created\n"
Aug 24 04:52:44.582: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 24 04:52:44.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-472'
Aug 24 04:52:46.139: INFO: stderr: ""
Aug 24 04:52:46.139: INFO: stdout: "deployment.apps/frontend created\n"
Aug 24 04:52:46.142: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 24 04:52:46.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-472'
Aug 24 04:52:47.685: INFO: stderr: ""
Aug 24 04:52:47.685: INFO: stdout: "deployment.apps/redis-master created\n"
Aug 24 04:52:47.686: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 24 04:52:47.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-472'
Aug 24 04:52:50.820: INFO: stderr: ""
Aug 24 04:52:50.820: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Aug 24 04:52:50.820: INFO: Waiting for all frontend pods to be Running.
Aug 24 04:52:55.873: INFO: Waiting for frontend to serve content.
Aug 24 04:52:57.073: INFO: Trying to add a new entry to the guestbook.
Aug 24 04:52:58.161: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 24 04:52:58.183: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-472'
Aug 24 04:52:59.366: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 04:52:59.366: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 24 04:52:59.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-472'
Aug 24 04:53:00.539: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 04:53:00.539: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 24 04:53:00.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-472'
Aug 24 04:53:01.702: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 04:53:01.702: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 24 04:53:01.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-472'
Aug 24 04:53:02.832: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 04:53:02.833: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 24 04:53:02.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-472'
Aug 24 04:53:04.292: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 04:53:04.292: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 24 04:53:04.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-472'
Aug 24 04:53:05.615: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 04:53:05.615: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:53:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-472" for this suite.
Aug 24 04:53:45.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:53:45.866: INFO: namespace kubectl-472 deletion completed in 40.227755866s

• [SLOW TEST:66.175 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:53:45.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:53:45.978: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9edfcca4-8e16-477d-934b-77d13609321f" in namespace "projected-2446" to be "success or failure"
Aug 24 04:53:46.032: INFO: Pod "downwardapi-volume-9edfcca4-8e16-477d-934b-77d13609321f": Phase="Pending", Reason="", readiness=false. Elapsed: 53.363507ms
Aug 24 04:53:48.089: INFO: Pod "downwardapi-volume-9edfcca4-8e16-477d-934b-77d13609321f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.110881392s
Aug 24 04:53:50.131: INFO: Pod "downwardapi-volume-9edfcca4-8e16-477d-934b-77d13609321f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152622355s
STEP: Saw pod success
Aug 24 04:53:50.131: INFO: Pod "downwardapi-volume-9edfcca4-8e16-477d-934b-77d13609321f" satisfied condition "success or failure"
Aug 24 04:53:50.154: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9edfcca4-8e16-477d-934b-77d13609321f container client-container: 
STEP: delete the pod
Aug 24 04:53:50.225: INFO: Waiting for pod downwardapi-volume-9edfcca4-8e16-477d-934b-77d13609321f to disappear
Aug 24 04:53:50.291: INFO: Pod downwardapi-volume-9edfcca4-8e16-477d-934b-77d13609321f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:53:50.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2446" for this suite.
Aug 24 04:53:56.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:53:56.494: INFO: namespace projected-2446 deletion completed in 6.191133098s

• [SLOW TEST:10.626 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:53:56.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9448.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9448.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9448.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9448.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9448.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9448.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 24 04:54:02.739: INFO: DNS probes using dns-9448/dns-test-5e0cae20-58f5-437d-a14c-963b8eb86c49 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:54:02.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9448" for this suite.
Aug 24 04:54:08.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:54:08.997: INFO: namespace dns-9448 deletion completed in 6.198017459s

• [SLOW TEST:12.502 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
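Annotation (not part of the log): the DNS probe commands above derive a pod A-record name by rewriting the pod IP's dots into dashes and appending the namespace's pod domain. A minimal local sketch of that `awk` step, using a hypothetical IP in place of `hostname -i`:

```shell
# Hypothetical pod IP; the real probe uses `hostname -i` inside the pod.
ip="10.244.1.5"
# Same rewrite the wheezy/jessie probers run: dots -> dashes, plus pod domain.
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-9448.pod.cluster.local"}')
echo "$podARec"   # -> 10-244-1-5.dns-9448.pod.cluster.local
```

The probers then resolve that name with `dig` over UDP and TCP and write an `OK` marker file for each lookup that returns an answer.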
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:54:09.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:55:09.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-71" for this suite.
Aug 24 04:55:31.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:55:31.307: INFO: namespace container-probe-71 deletion completed in 22.172633279s

• [SLOW TEST:82.307 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:55:31.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 24 04:55:31.445: INFO: Waiting up to 5m0s for pod "pod-72e9cbe1-a50e-497f-816d-eb0f23780851" in namespace "emptydir-3916" to be "success or failure"
Aug 24 04:55:31.468: INFO: Pod "pod-72e9cbe1-a50e-497f-816d-eb0f23780851": Phase="Pending", Reason="", readiness=false. Elapsed: 22.160418ms
Aug 24 04:55:33.474: INFO: Pod "pod-72e9cbe1-a50e-497f-816d-eb0f23780851": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028986406s
Aug 24 04:55:35.481: INFO: Pod "pod-72e9cbe1-a50e-497f-816d-eb0f23780851": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035946309s
STEP: Saw pod success
Aug 24 04:55:35.482: INFO: Pod "pod-72e9cbe1-a50e-497f-816d-eb0f23780851" satisfied condition "success or failure"
Aug 24 04:55:35.485: INFO: Trying to get logs from node iruya-worker pod pod-72e9cbe1-a50e-497f-816d-eb0f23780851 container test-container: 
STEP: delete the pod
Aug 24 04:55:35.524: INFO: Waiting for pod pod-72e9cbe1-a50e-497f-816d-eb0f23780851 to disappear
Aug 24 04:55:35.532: INFO: Pod pod-72e9cbe1-a50e-497f-816d-eb0f23780851 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:55:35.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3916" for this suite.
Aug 24 04:55:41.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:55:41.755: INFO: namespace emptydir-3916 deletion completed in 6.214730799s

• [SLOW TEST:10.447 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
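Annotation (not part of the log): the `(non-root,0777,default)` emptyDir test above mounts an emptyDir volume as a non-root user and checks the directory's mode bits. A rough local analogue of the permission check, with a temp directory standing in for the volume mount point:

```shell
# mktemp -d stands in for the emptyDir mount path inside the test container.
d=$(mktemp -d)
chmod 0777 "$d"
# The e2e container verifies the octal mode; stat prints it directly here.
stat -c '%a' "$d"   # prints: 777
rmdir "$d"
```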
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:55:41.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-76e68118-aa0d-4979-ac5a-e86a1f189119
STEP: Creating configMap with name cm-test-opt-upd-e5eec8a0-829a-44d2-9655-49daa4304f11
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-76e68118-aa0d-4979-ac5a-e86a1f189119
STEP: Updating configmap cm-test-opt-upd-e5eec8a0-829a-44d2-9655-49daa4304f11
STEP: Creating configMap with name cm-test-opt-create-848250e6-837e-4912-ae72-872f24288c4c
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:55:50.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5401" for this suite.
Aug 24 04:56:12.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:56:12.352: INFO: namespace projected-5401 deletion completed in 22.196065232s

• [SLOW TEST:30.596 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:56:12.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 24 04:56:17.023: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9571 pod-service-account-a34e31dc-7d3f-418e-bbde-140e68825e62 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 24 04:56:18.417: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9571 pod-service-account-a34e31dc-7d3f-418e-bbde-140e68825e62 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 24 04:56:19.814: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9571 pod-service-account-a34e31dc-7d3f-418e-bbde-140e68825e62 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:56:21.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9571" for this suite.
Aug 24 04:56:27.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:56:27.407: INFO: namespace svcaccounts-9571 deletion completed in 6.176183542s

• [SLOW TEST:15.054 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
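Annotation (not part of the log): the ServiceAccounts test above reads the three files that Kubernetes auto-mounts into every pod from the service account secret, via `kubectl exec ... cat <file>`. The fixed mount path and file names, as exercised by the log's exec commands:

```shell
# Auto-mounted ServiceAccount credential path inside a pod (fixed by Kubernetes).
sa=/var/run/secrets/kubernetes.io/serviceaccount
# The test cats each of these in turn through `kubectl exec`.
for f in token ca.crt namespace; do
  echo "$sa/$f"
done
```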
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:56:27.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-cf0d3bf6-7dca-4d84-9373-9e7d65bf62d8 in namespace container-probe-1383
Aug 24 04:56:31.573: INFO: Started pod busybox-cf0d3bf6-7dca-4d84-9373-9e7d65bf62d8 in namespace container-probe-1383
STEP: checking the pod's current state and verifying that restartCount is present
Aug 24 04:56:31.579: INFO: Initial restart count of pod busybox-cf0d3bf6-7dca-4d84-9373-9e7d65bf62d8 is 0
Aug 24 04:57:17.754: INFO: Restart count of pod container-probe-1383/busybox-cf0d3bf6-7dca-4d84-9373-9e7d65bf62d8 is now 1 (46.175269594s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:57:17.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1383" for this suite.
Aug 24 04:57:23.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:57:23.967: INFO: namespace container-probe-1383 deletion completed in 6.167971312s

• [SLOW TEST:56.558 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
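Annotation (not part of the log): the liveness-probe test above relies on `cat /tmp/health` succeeding while the file exists and failing after the container removes it, at which point the kubelet restarts the container (restartCount goes 0 to 1 in the log). A local simulation of the probe's exit-code behavior, using a temp file in place of `/tmp/health`:

```shell
# Temp file stands in for /tmp/health inside the busybox container.
f=$(mktemp)
# While the file exists, the exec probe (cat) exits 0: container stays up.
cat "$f" >/dev/null && echo "probe ok"
# The test container deletes the file partway through its lifetime.
rm -f "$f"
# Now the probe exits non-zero; the kubelet would restart the container.
cat "$f" >/dev/null 2>&1 || echo "probe failed; kubelet restarts container"
```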
SSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:57:23.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:57:24.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9906" for this suite.
Aug 24 04:57:30.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:57:30.268: INFO: namespace services-9906 deletion completed in 6.189301935s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.300 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:57:30.271: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-baa80f48-b24c-47da-bfb0-b9aa5a6d835e in namespace container-probe-627
Aug 24 04:57:34.464: INFO: Started pod liveness-baa80f48-b24c-47da-bfb0-b9aa5a6d835e in namespace container-probe-627
STEP: checking the pod's current state and verifying that restartCount is present
Aug 24 04:57:34.467: INFO: Initial restart count of pod liveness-baa80f48-b24c-47da-bfb0-b9aa5a6d835e is 0
Aug 24 04:57:52.579: INFO: Restart count of pod container-probe-627/liveness-baa80f48-b24c-47da-bfb0-b9aa5a6d835e is now 1 (18.111443871s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:57:52.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-627" for this suite.
Aug 24 04:57:58.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:57:58.862: INFO: namespace container-probe-627 deletion completed in 6.215592133s

• [SLOW TEST:28.591 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:57:58.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Aug 24 04:57:58.999: INFO: Waiting up to 5m0s for pod "var-expansion-3293a09d-77aa-4b3c-a583-3d86b3f5a2f0" in namespace "var-expansion-6727" to be "success or failure"
Aug 24 04:57:59.004: INFO: Pod "var-expansion-3293a09d-77aa-4b3c-a583-3d86b3f5a2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.562176ms
Aug 24 04:58:01.013: INFO: Pod "var-expansion-3293a09d-77aa-4b3c-a583-3d86b3f5a2f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014411231s
Aug 24 04:58:03.020: INFO: Pod "var-expansion-3293a09d-77aa-4b3c-a583-3d86b3f5a2f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021223751s
STEP: Saw pod success
Aug 24 04:58:03.021: INFO: Pod "var-expansion-3293a09d-77aa-4b3c-a583-3d86b3f5a2f0" satisfied condition "success or failure"
Aug 24 04:58:03.025: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-3293a09d-77aa-4b3c-a583-3d86b3f5a2f0 container dapi-container: 
STEP: delete the pod
Aug 24 04:58:03.055: INFO: Waiting for pod var-expansion-3293a09d-77aa-4b3c-a583-3d86b3f5a2f0 to disappear
Aug 24 04:58:03.076: INFO: Pod var-expansion-3293a09d-77aa-4b3c-a583-3d86b3f5a2f0 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:58:03.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-6727" for this suite.
Aug 24 04:58:09.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:58:09.276: INFO: namespace var-expansion-6727 deletion completed in 6.191637465s

• [SLOW TEST:10.413 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
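Annotation (not part of the log): the Variable Expansion test above verifies that `$(VAR_NAME)` references in a container's `args` are substituted from the container's environment before the command runs. A rough local analogue of that substitution with `sed`, using a hypothetical variable value:

```shell
# Hypothetical env value; in the real test it comes from the pod spec's env.
POD_NAME="var-expansion-demo"
# Kubernetes-style $(VAR) reference as it would appear in a container arg.
arg='echo $(POD_NAME)'
# Substitute the reference the way the kubelet expands args before exec.
echo "$arg" | sed "s/\$(POD_NAME)/$POD_NAME/"   # -> echo var-expansion-demo
```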
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:58:09.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 04:58:09.403: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842" in namespace "projected-3955" to be "success or failure"
Aug 24 04:58:09.412: INFO: Pod "downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842": Phase="Pending", Reason="", readiness=false. Elapsed: 8.957554ms
Aug 24 04:58:11.682: INFO: Pod "downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.27866567s
Aug 24 04:58:13.694: INFO: Pod "downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842": Phase="Running", Reason="", readiness=true. Elapsed: 4.291037385s
Aug 24 04:58:15.706: INFO: Pod "downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.302900889s
STEP: Saw pod success
Aug 24 04:58:15.706: INFO: Pod "downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842" satisfied condition "success or failure"
Aug 24 04:58:15.711: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842 container client-container: 
STEP: delete the pod
Aug 24 04:58:15.781: INFO: Waiting for pod downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842 to disappear
Aug 24 04:58:15.802: INFO: Pod downwardapi-volume-d5ec104f-297c-4ea9-86ac-17bf41d39842 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:58:15.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3955" for this suite.
Aug 24 04:58:21.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:58:22.006: INFO: namespace projected-3955 deletion completed in 6.190693614s

• [SLOW TEST:12.727 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:58:22.007: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 24 04:58:22.119: INFO: Waiting up to 5m0s for pod "pod-63118b5d-3b0b-4886-8847-710b1a9f6059" in namespace "emptydir-341" to be "success or failure"
Aug 24 04:58:22.127: INFO: Pod "pod-63118b5d-3b0b-4886-8847-710b1a9f6059": Phase="Pending", Reason="", readiness=false. Elapsed: 7.577809ms
Aug 24 04:58:24.135: INFO: Pod "pod-63118b5d-3b0b-4886-8847-710b1a9f6059": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015719265s
Aug 24 04:58:26.142: INFO: Pod "pod-63118b5d-3b0b-4886-8847-710b1a9f6059": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02330113s
STEP: Saw pod success
Aug 24 04:58:26.143: INFO: Pod "pod-63118b5d-3b0b-4886-8847-710b1a9f6059" satisfied condition "success or failure"
Aug 24 04:58:26.148: INFO: Trying to get logs from node iruya-worker2 pod pod-63118b5d-3b0b-4886-8847-710b1a9f6059 container test-container: 
STEP: delete the pod
Aug 24 04:58:26.184: INFO: Waiting for pod pod-63118b5d-3b0b-4886-8847-710b1a9f6059 to disappear
Aug 24 04:58:26.191: INFO: Pod pod-63118b5d-3b0b-4886-8847-710b1a9f6059 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:58:26.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-341" for this suite.
Aug 24 04:58:32.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:58:32.373: INFO: namespace emptydir-341 deletion completed in 6.172398215s

• [SLOW TEST:10.366 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
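The emptyDir spec above ("root,0777,default") runs a pod as root with an emptyDir on the default medium and verifies 0777 file semantics. A sketch of such a pod (image and check command are assumptions; the real test drives its dedicated mounttest container with arguments rather than a shell):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0777-example
spec:
  containers:
  - name: test-container
    image: busybox                 # assumption: the suite uses its own mounttest image
    command: ["sh", "-c", "stat -c '%a' /test-volume && echo content > /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # default medium (node storage); memory-backed would set medium: Memory
  restartPolicy: Never
```

As in the log, the framework waits for phase `Succeeded` ("success or failure") and then reads the container's logs to verify the reported mode.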
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:58:32.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 24 04:58:36.587: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:58:36.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3993" for this suite.
Aug 24 04:58:42.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:58:42.864: INFO: namespace container-runtime-3993 deletion completed in 6.190204932s

• [SLOW TEST:10.489 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
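The termination-message spec above has a non-root container write to a non-default `terminationMessagePath` and then checks the reported message. A hedged sketch of the pod shape (UID, path, and image are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-example
spec:
  securityContext:
    runAsUser: 1000                         # non-root, per the spec name
  containers:
  - name: term
    image: busybox                          # assumption: suite image differs
    command: ["sh", "-c", "printf DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log   # non-default path
  restartPolicy: Never
```

After the container terminates, the kubelet surfaces the file's contents in `status.containerStatuses[].state.terminated.message`, which is why the log line reads `Expected: &{DONE} to match Container's Termination Message: DONE`.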
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:58:42.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-14e03530-3c26-49b7-b74f-62c855c3e8b6
STEP: Creating a pod to test consume secrets
Aug 24 04:58:43.032: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-689e592f-8712-4de1-a5e6-9cc1d9d6d222" in namespace "projected-2709" to be "success or failure"
Aug 24 04:58:43.094: INFO: Pod "pod-projected-secrets-689e592f-8712-4de1-a5e6-9cc1d9d6d222": Phase="Pending", Reason="", readiness=false. Elapsed: 61.864326ms
Aug 24 04:58:45.101: INFO: Pod "pod-projected-secrets-689e592f-8712-4de1-a5e6-9cc1d9d6d222": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068719418s
Aug 24 04:58:47.107: INFO: Pod "pod-projected-secrets-689e592f-8712-4de1-a5e6-9cc1d9d6d222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074866691s
STEP: Saw pod success
Aug 24 04:58:47.107: INFO: Pod "pod-projected-secrets-689e592f-8712-4de1-a5e6-9cc1d9d6d222" satisfied condition "success or failure"
Aug 24 04:58:47.111: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-689e592f-8712-4de1-a5e6-9cc1d9d6d222 container projected-secret-volume-test: 
STEP: delete the pod
Aug 24 04:58:47.184: INFO: Waiting for pod pod-projected-secrets-689e592f-8712-4de1-a5e6-9cc1d9d6d222 to disappear
Aug 24 04:58:47.204: INFO: Pod pod-projected-secrets-689e592f-8712-4de1-a5e6-9cc1d9d6d222 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:58:47.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2709" for this suite.
Aug 24 04:58:53.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:58:53.416: INFO: namespace projected-2709 deletion completed in 6.205154197s

• [SLOW TEST:10.552 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
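The projected-secret spec above mounts a secret through a projected volume with a key-to-path mapping and a per-item mode. A manifest approximating it (key names, paths, and the 0400 mode are assumptions mirroring the spec description, not values taken from the log):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test-map-example
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example
spec:
  containers:
  - name: projected-secret-volume-test
    image: busybox                  # assumption: suite image differs
    command: ["sh", "-c", "cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-example
          items:
          - key: data-1
            path: new-path-data-1   # the "mapping": key renamed on disk
            mode: 0400              # the per-item mode under test
  restartPolicy: Never
```

The spec then verifies both the file's contents and its mode from inside the pod.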
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:58:53.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a8d7abeb-b8eb-495b-8e95-832b1b6bec1d
STEP: Creating a pod to test consume secrets
Aug 24 04:58:53.530: INFO: Waiting up to 5m0s for pod "pod-secrets-84085491-786e-4aea-9654-f2f616bac6ab" in namespace "secrets-8875" to be "success or failure"
Aug 24 04:58:53.550: INFO: Pod "pod-secrets-84085491-786e-4aea-9654-f2f616bac6ab": Phase="Pending", Reason="", readiness=false. Elapsed: 20.126477ms
Aug 24 04:58:55.556: INFO: Pod "pod-secrets-84085491-786e-4aea-9654-f2f616bac6ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025835972s
Aug 24 04:58:57.563: INFO: Pod "pod-secrets-84085491-786e-4aea-9654-f2f616bac6ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032719953s
STEP: Saw pod success
Aug 24 04:58:57.563: INFO: Pod "pod-secrets-84085491-786e-4aea-9654-f2f616bac6ab" satisfied condition "success or failure"
Aug 24 04:58:57.583: INFO: Trying to get logs from node iruya-worker pod pod-secrets-84085491-786e-4aea-9654-f2f616bac6ab container secret-volume-test: 
STEP: delete the pod
Aug 24 04:58:57.618: INFO: Waiting for pod pod-secrets-84085491-786e-4aea-9654-f2f616bac6ab to disappear
Aug 24 04:58:57.630: INFO: Pod pod-secrets-84085491-786e-4aea-9654-f2f616bac6ab no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 04:58:57.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8875" for this suite.
Aug 24 04:59:03.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 04:59:03.810: INFO: namespace secrets-8875 deletion completed in 6.170594722s

• [SLOW TEST:10.392 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
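The Secrets spec above consumes a secret volume as a non-root user with `defaultMode` and `fsGroup` set. A sketch of the pod's security-relevant fields (the numeric IDs and mode are illustrative assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example
spec:
  securityContext:
    runAsUser: 1000                # non-root
    fsGroup: 1000                  # group ownership applied to the volume files
  containers:
  - name: secret-volume-test
    image: busybox                 # assumption: suite image differs
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
      defaultMode: 0440            # applied to every key unless an item overrides it
  restartPolicy: Never
```

With `fsGroup` set, the kubelet chowns the volume's files to that group, so the non-root container can read them despite the restrictive mode.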
SSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 04:59:03.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 24 04:59:03.889: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-a,UID:af11ea15-3944-4361-b71e-921b01935745,ResourceVersion:2293376,Generation:0,CreationTimestamp:2020-08-24 04:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 24 04:59:03.889: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-a,UID:af11ea15-3944-4361-b71e-921b01935745,ResourceVersion:2293376,Generation:0,CreationTimestamp:2020-08-24 04:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 24 04:59:13.903: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-a,UID:af11ea15-3944-4361-b71e-921b01935745,ResourceVersion:2293396,Generation:0,CreationTimestamp:2020-08-24 04:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 24 04:59:13.904: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-a,UID:af11ea15-3944-4361-b71e-921b01935745,ResourceVersion:2293396,Generation:0,CreationTimestamp:2020-08-24 04:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 24 04:59:23.915: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-a,UID:af11ea15-3944-4361-b71e-921b01935745,ResourceVersion:2293417,Generation:0,CreationTimestamp:2020-08-24 04:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 24 04:59:23.916: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-a,UID:af11ea15-3944-4361-b71e-921b01935745,ResourceVersion:2293417,Generation:0,CreationTimestamp:2020-08-24 04:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 24 04:59:33.926: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-a,UID:af11ea15-3944-4361-b71e-921b01935745,ResourceVersion:2293437,Generation:0,CreationTimestamp:2020-08-24 04:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 24 04:59:33.927: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-a,UID:af11ea15-3944-4361-b71e-921b01935745,ResourceVersion:2293437,Generation:0,CreationTimestamp:2020-08-24 04:59:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 24 04:59:43.939: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-b,UID:41c30f77-fb96-4eab-b04a-82b6996f61c3,ResourceVersion:2293458,Generation:0,CreationTimestamp:2020-08-24 04:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 24 04:59:43.940: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-b,UID:41c30f77-fb96-4eab-b04a-82b6996f61c3,ResourceVersion:2293458,Generation:0,CreationTimestamp:2020-08-24 04:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 24 04:59:53.952: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-b,UID:41c30f77-fb96-4eab-b04a-82b6996f61c3,ResourceVersion:2293479,Generation:0,CreationTimestamp:2020-08-24 04:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 24 04:59:53.953: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9050,SelfLink:/api/v1/namespaces/watch-9050/configmaps/e2e-watch-test-configmap-b,UID:41c30f77-fb96-4eab-b04a-82b6996f61c3,ResourceVersion:2293479,Generation:0,CreationTimestamp:2020-08-24 04:59:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:00:03.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9050" for this suite.
Aug 24 05:00:09.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:00:10.132: INFO: namespace watch-9050 deletion completed in 6.164262702s

• [SLOW TEST:66.321 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
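The Watchers spec above opens three label-selector watches (label A, label B, A-or-B) and asserts that each ADDED/MODIFIED/DELETED event from the log reaches exactly the matching watchers. The object under watch is an ordinary labeled ConfigMap; a sketch matching the names visible in the log:

```yaml
# A watch per selector can be opened with, for example:
#   kubectl get configmaps -l 'watch-this-configmap=multiple-watchers-A' --watch
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  labels:
    watch-this-configmap: multiple-watchers-A   # matched by watcher A and watcher A-or-B
data:
  mutation: "1"                                 # the test bumps this value to trigger MODIFIED events
```

This is why every event in the log appears exactly twice: once on the single-label watch and once on the A-or-B watch, each carrying the same `ResourceVersion`.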
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:00:10.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:00:10.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Aug 24 05:00:11.285: INFO: stderr: ""
Aug 24 05:00:11.285: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/arm\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:00:11.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3084" for this suite.
Aug 24 05:00:17.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:00:17.479: INFO: namespace kubectl-3084 deletion completed in 6.176226472s

• [SLOW TEST:7.345 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:00:17.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:00:17.591: INFO: Waiting up to 5m0s for pod "downwardapi-volume-36e59887-fd95-45d9-82e8-191d593ef549" in namespace "downward-api-5546" to be "success or failure"
Aug 24 05:00:17.614: INFO: Pod "downwardapi-volume-36e59887-fd95-45d9-82e8-191d593ef549": Phase="Pending", Reason="", readiness=false. Elapsed: 22.414453ms
Aug 24 05:00:19.622: INFO: Pod "downwardapi-volume-36e59887-fd95-45d9-82e8-191d593ef549": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03075373s
Aug 24 05:00:21.631: INFO: Pod "downwardapi-volume-36e59887-fd95-45d9-82e8-191d593ef549": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039018471s
STEP: Saw pod success
Aug 24 05:00:21.631: INFO: Pod "downwardapi-volume-36e59887-fd95-45d9-82e8-191d593ef549" satisfied condition "success or failure"
Aug 24 05:00:21.637: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-36e59887-fd95-45d9-82e8-191d593ef549 container client-container: 
STEP: delete the pod
Aug 24 05:00:21.658: INFO: Waiting for pod downwardapi-volume-36e59887-fd95-45d9-82e8-191d593ef549 to disappear
Aug 24 05:00:21.661: INFO: Pod downwardapi-volume-36e59887-fd95-45d9-82e8-191d593ef549 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:00:21.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5546" for this suite.
Aug 24 05:00:27.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:00:27.933: INFO: namespace downward-api-5546 deletion completed in 6.264298874s

• [SLOW TEST:10.450 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
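The Downward API spec above deliberately omits `resources.limits.memory`, so the downward API falls back to reporting the node's allocatable memory. A sketch of the pod (image and paths are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-example
spec:
  containers:
  - name: client-container
    image: busybox                 # assumption: suite image differs
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # no resources.limits.memory here: the reported value defaults to
    # the node's allocatable memory, which is what the spec verifies
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
  restartPolicy: Never
```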
SSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:00:27.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 24 05:00:27.989: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Aug 24 05:00:48.446: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 24 05:00:51.036: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733842048, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733842048, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733842048, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733842048, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:00:53.044: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733842048, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733842048, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733842048, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733842048, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:00:55.703: INFO: Waited 634.349941ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:00:56.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-196" for this suite.
Aug 24 05:01:02.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:01:02.410: INFO: namespace aggregator-196 deletion completed in 6.23516624s

• [SLOW TEST:34.476 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:01:02.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-49cd9a24-2692-4d41-bbe6-df55d2cd7e85
STEP: Creating configMap with name cm-test-opt-upd-132eaa1c-838c-431e-91d3-268d4febc5cb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-49cd9a24-2692-4d41-bbe6-df55d2cd7e85
STEP: Updating configmap cm-test-opt-upd-132eaa1c-838c-431e-91d3-268d4febc5cb
STEP: Creating configMap with name cm-test-opt-create-17acaa4b-286d-4538-865e-5b26134d8cf2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:01:10.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5333" for this suite.
Aug 24 05:01:34.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:01:35.003: INFO: namespace configmap-5333 deletion completed in 24.207071217s

• [SLOW TEST:32.591 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:01:35.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 24 05:01:39.693: INFO: Successfully updated pod "annotationupdate8e494d5f-9302-4496-95f7-c079bf5e7ae3"
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:01:43.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-959" for this suite.
Aug 24 05:02:05.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:02:05.911: INFO: namespace projected-959 deletion completed in 22.159853543s

• [SLOW TEST:30.903 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:02:05.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-399cb7e9-d520-4d61-a44b-d28d5e2db464
STEP: Creating a pod to test consume configMaps
Aug 24 05:02:06.012: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-60f92f0a-5c94-4679-a300-b8eb15132d82" in namespace "projected-6883" to be "success or failure"
Aug 24 05:02:06.030: INFO: Pod "pod-projected-configmaps-60f92f0a-5c94-4679-a300-b8eb15132d82": Phase="Pending", Reason="", readiness=false. Elapsed: 16.960265ms
Aug 24 05:02:08.038: INFO: Pod "pod-projected-configmaps-60f92f0a-5c94-4679-a300-b8eb15132d82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02493821s
Aug 24 05:02:10.413: INFO: Pod "pod-projected-configmaps-60f92f0a-5c94-4679-a300-b8eb15132d82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.400459104s
STEP: Saw pod success
Aug 24 05:02:10.413: INFO: Pod "pod-projected-configmaps-60f92f0a-5c94-4679-a300-b8eb15132d82" satisfied condition "success or failure"
Aug 24 05:02:10.419: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-60f92f0a-5c94-4679-a300-b8eb15132d82 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 24 05:02:11.182: INFO: Waiting for pod pod-projected-configmaps-60f92f0a-5c94-4679-a300-b8eb15132d82 to disappear
Aug 24 05:02:11.198: INFO: Pod pod-projected-configmaps-60f92f0a-5c94-4679-a300-b8eb15132d82 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:02:11.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6883" for this suite.
Aug 24 05:02:17.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:02:17.363: INFO: namespace projected-6883 deletion completed in 6.157098097s

• [SLOW TEST:11.451 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:02:17.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-z4bpb in namespace proxy-443
I0824 05:02:17.553285       7 runners.go:180] Created replication controller with name: proxy-service-z4bpb, namespace: proxy-443, replica count: 1
I0824 05:02:18.605103       7 runners.go:180] proxy-service-z4bpb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0824 05:02:19.605973       7 runners.go:180] proxy-service-z4bpb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0824 05:02:20.606743       7 runners.go:180] proxy-service-z4bpb Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0824 05:02:21.607664       7 runners.go:180] proxy-service-z4bpb Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0824 05:02:22.608420       7 runners.go:180] proxy-service-z4bpb Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Aug 24 05:02:22.617: INFO: setup took 5.128344511s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Aug 24 05:02:22.626: INFO: (0) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testt... (200; 14.846953ms)
Aug 24 05:02:22.633: INFO: (0) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname2/proxy/: bar (200; 14.674172ms)
Aug 24 05:02:22.633: INFO: (0) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 14.147463ms)
Aug 24 05:02:22.633: INFO: (0) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 14.629442ms)
Aug 24 05:02:22.633: INFO: (0) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z/proxy/: test (200; 15.116056ms)
Aug 24 05:02:22.633: INFO: (0) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 14.987473ms)
Aug 24 05:02:22.636: INFO: (0) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 18.481152ms)
Aug 24 05:02:22.637: INFO: (0) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 19.63356ms)
Aug 24 05:02:22.638: INFO: (0) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 20.089294ms)
Aug 24 05:02:22.639: INFO: (0) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 21.014405ms)
Aug 24 05:02:22.640: INFO: (0) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testtest (200; 9.215084ms)
Aug 24 05:02:22.650: INFO: (1) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:1080/proxy/: t... (200; 9.388695ms)
Aug 24 05:02:22.650: INFO: (1) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 9.559617ms)
Aug 24 05:02:22.651: INFO: (1) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 9.688682ms)
Aug 24 05:02:22.656: INFO: (2) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testtest (200; 8.381203ms)
Aug 24 05:02:22.660: INFO: (2) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 8.847646ms)
Aug 24 05:02:22.660: INFO: (2) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 8.948598ms)
Aug 24 05:02:22.660: INFO: (2) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname1/proxy/: foo (200; 8.953105ms)
Aug 24 05:02:22.660: INFO: (2) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:1080/proxy/: t... (200; 9.305594ms)
Aug 24 05:02:22.661: INFO: (2) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 9.469887ms)
Aug 24 05:02:22.661: INFO: (2) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 9.57493ms)
Aug 24 05:02:22.661: INFO: (2) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 9.674255ms)
Aug 24 05:02:22.667: INFO: (3) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 6.267413ms)
Aug 24 05:02:22.668: INFO: (3) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 6.690621ms)
Aug 24 05:02:22.669: INFO: (3) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 7.627494ms)
Aug 24 05:02:22.669: INFO: (3) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:1080/proxy/: t... (200; 7.964281ms)
Aug 24 05:02:22.669: INFO: (3) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testtest (200; 8.172403ms)
Aug 24 05:02:22.670: INFO: (3) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testt... (200; 4.994278ms)
Aug 24 05:02:22.680: INFO: (4) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 6.537729ms)
Aug 24 05:02:22.680: INFO: (4) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 6.603251ms)
Aug 24 05:02:22.680: INFO: (4) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z/proxy/: test (200; 6.699928ms)
Aug 24 05:02:22.680: INFO: (4) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 6.955574ms)
Aug 24 05:02:22.681: INFO: (4) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 7.073479ms)
Aug 24 05:02:22.681: INFO: (4) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname1/proxy/: foo (200; 7.198717ms)
Aug 24 05:02:22.681: INFO: (4) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: t... (200; 5.541853ms)
Aug 24 05:02:22.688: INFO: (5) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testtest (200; 10.159696ms)
Aug 24 05:02:22.693: INFO: (5) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 10.385896ms)
Aug 24 05:02:22.693: INFO: (5) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 10.498517ms)
Aug 24 05:02:22.694: INFO: (5) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 10.270552ms)
Aug 24 05:02:22.699: INFO: (6) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 5.216643ms)
Aug 24 05:02:22.700: INFO: (6) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testt... (200; 6.471289ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname2/proxy/: bar (200; 7.120401ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z/proxy/: test (200; 7.020272ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 7.184516ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 7.188625ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 7.322252ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 7.513502ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 7.497924ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname1/proxy/: foo (200; 7.470892ms)
Aug 24 05:02:22.701: INFO: (6) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testtest (200; 11.030245ms)
Aug 24 05:02:22.715: INFO: (7) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 11.07703ms)
Aug 24 05:02:22.715: INFO: (7) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:1080/proxy/: t... (200; 11.566949ms)
Aug 24 05:02:22.715: INFO: (7) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 11.387088ms)
Aug 24 05:02:22.719: INFO: (8) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 4.241614ms)
Aug 24 05:02:22.720: INFO: (8) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 4.372916ms)
Aug 24 05:02:22.720: INFO: (8) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:1080/proxy/: t... (200; 3.882926ms)
Aug 24 05:02:22.721: INFO: (8) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 5.491305ms)
Aug 24 05:02:22.722: INFO: (8) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 6.714918ms)
Aug 24 05:02:22.722: INFO: (8) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z/proxy/: test (200; 6.862758ms)
Aug 24 05:02:22.722: INFO: (8) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname1/proxy/: foo (200; 7.181708ms)
Aug 24 05:02:22.722: INFO: (8) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 6.88769ms)
Aug 24 05:02:22.723: INFO: (8) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 7.069515ms)
Aug 24 05:02:22.723: INFO: (8) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 7.049605ms)
Aug 24 05:02:22.723: INFO: (8) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 7.204137ms)
Aug 24 05:02:22.723: INFO: (8) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 7.034481ms)
Aug 24 05:02:22.723: INFO: (8) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testtestt... (200; 8.823246ms)
Aug 24 05:02:22.734: INFO: (9) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z/proxy/: test (200; 9.050331ms)
Aug 24 05:02:22.734: INFO: (9) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testtest (200; 10.727555ms)
Aug 24 05:02:22.748: INFO: (10) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:1080/proxy/: t... (200; 12.356407ms)
Aug 24 05:02:22.748: INFO: (10) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 11.00813ms)
Aug 24 05:02:22.748: INFO: (10) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 11.222756ms)
Aug 24 05:02:22.748: INFO: (10) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testtest (200; 4.777373ms)
Aug 24 05:02:22.755: INFO: (11) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 5.667147ms)
Aug 24 05:02:22.755: INFO: (11) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 5.128753ms)
Aug 24 05:02:22.756: INFO: (11) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 6.352604ms)
Aug 24 05:02:22.756: INFO: (11) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 6.239761ms)
Aug 24 05:02:22.756: INFO: (11) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 6.492736ms)
Aug 24 05:02:22.757: INFO: (11) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: t... (200; 7.234666ms)
Aug 24 05:02:22.757: INFO: (11) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 7.337084ms)
Aug 24 05:02:22.758: INFO: (11) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 7.840831ms)
Aug 24 05:02:22.758: INFO: (11) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 7.88928ms)
Aug 24 05:02:22.759: INFO: (11) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname2/proxy/: bar (200; 8.845217ms)
Aug 24 05:02:22.759: INFO: (11) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 8.976887ms)
Aug 24 05:02:22.763: INFO: (12) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 3.764728ms)
Aug 24 05:02:22.765: INFO: (12) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname1/proxy/: foo (200; 5.564116ms)
Aug 24 05:02:22.765: INFO: (12) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname2/proxy/: bar (200; 5.756279ms)
Aug 24 05:02:22.765: INFO: (12) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: test (200; 6.463144ms)
Aug 24 05:02:22.766: INFO: (12) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 6.738658ms)
Aug 24 05:02:22.766: INFO: (12) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 6.957219ms)
Aug 24 05:02:22.767: INFO: (12) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 6.918019ms)
Aug 24 05:02:22.767: INFO: (12) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 6.79096ms)
Aug 24 05:02:22.767: INFO: (12) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 7.264854ms)
Aug 24 05:02:22.767: INFO: (12) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 7.043944ms)
Aug 24 05:02:22.767: INFO: (12) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 7.226311ms)
Aug 24 05:02:22.767: INFO: (12) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:1080/proxy/: t... (200; 7.054738ms)
Aug 24 05:02:22.767: INFO: (12) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testt... (200; 4.557092ms)
Aug 24 05:02:22.774: INFO: (13) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 5.049444ms)
Aug 24 05:02:22.774: INFO: (13) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 6.363042ms)
Aug 24 05:02:22.775: INFO: (13) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 5.408618ms)
Aug 24 05:02:22.775: INFO: (13) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 6.779692ms)
Aug 24 05:02:22.775: INFO: (13) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testtest (200; 8.159151ms)
Aug 24 05:02:22.776: INFO: (13) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: t... (200; 6.154559ms)
Aug 24 05:02:22.783: INFO: (14) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: testtest (200; 6.374006ms)
Aug 24 05:02:22.784: INFO: (14) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 6.757135ms)
Aug 24 05:02:22.784: INFO: (14) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 6.64813ms)
Aug 24 05:02:22.784: INFO: (14) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 6.962879ms)
Aug 24 05:02:22.784: INFO: (14) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname1/proxy/: foo (200; 6.921583ms)
Aug 24 05:02:22.785: INFO: (14) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 7.897617ms)
Aug 24 05:02:22.787: INFO: (14) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 9.616413ms)
Aug 24 05:02:22.787: INFO: (14) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 9.941962ms)
Aug 24 05:02:22.787: INFO: (14) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 9.995913ms)
Aug 24 05:02:22.792: INFO: (15) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 4.172748ms)
Aug 24 05:02:22.793: INFO: (15) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname1/proxy/: foo (200; 5.155674ms)
Aug 24 05:02:22.793: INFO: (15) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 5.32883ms)
Aug 24 05:02:22.793: INFO: (15) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 5.340575ms)
Aug 24 05:02:22.794: INFO: (15) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 6.329818ms)
Aug 24 05:02:22.794: INFO: (15) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z/proxy/: test (200; 6.366452ms)
Aug 24 05:02:22.795: INFO: (15) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 7.25301ms)
Aug 24 05:02:22.795: INFO: (15) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testt... (200; 7.250351ms)
Aug 24 05:02:22.795: INFO: (15) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname2/proxy/: bar (200; 7.457745ms)
Aug 24 05:02:22.795: INFO: (15) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 7.5983ms)
Aug 24 05:02:22.795: INFO: (15) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 7.746846ms)
Aug 24 05:02:22.796: INFO: (15) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 8.092249ms)
Aug 24 05:02:22.796: INFO: (15) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 8.646523ms)
Aug 24 05:02:22.797: INFO: (15) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 8.995347ms)
Aug 24 05:02:22.797: INFO: (15) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: test (200; 24.859717ms)
Aug 24 05:02:22.822: INFO: (16) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testt... (200; 25.496671ms)
Aug 24 05:02:22.823: INFO: (16) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 25.776705ms)
Aug 24 05:02:22.823: INFO: (16) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname1/proxy/: foo (200; 25.890112ms)
Aug 24 05:02:22.824: INFO: (16) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 26.674428ms)
Aug 24 05:02:22.824: INFO: (16) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 26.786487ms)
Aug 24 05:02:22.824: INFO: (16) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname2/proxy/: bar (200; 26.960813ms)
Aug 24 05:02:22.824: INFO: (16) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname1/proxy/: foo (200; 27.020486ms)
Aug 24 05:02:22.829: INFO: (17) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 4.125382ms)
Aug 24 05:02:22.830: INFO: (17) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 5.402343ms)
Aug 24 05:02:22.831: INFO: (17) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 5.965063ms)
Aug 24 05:02:22.831: INFO: (17) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z/proxy/: test (200; 6.016635ms)
Aug 24 05:02:22.831: INFO: (17) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:1080/proxy/: t... (200; 6.57697ms)
Aug 24 05:02:22.831: INFO: (17) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 6.436924ms)
Aug 24 05:02:22.832: INFO: (17) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testt... (200; 4.267198ms)
Aug 24 05:02:22.840: INFO: (18) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/: tls baz (200; 4.929351ms)
Aug 24 05:02:22.840: INFO: (18) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z/proxy/: test (200; 5.147388ms)
Aug 24 05:02:22.840: INFO: (18) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 5.273121ms)
Aug 24 05:02:22.841: INFO: (18) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname1/proxy/: tls baz (200; 5.163063ms)
Aug 24 05:02:22.841: INFO: (18) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:462/proxy/: tls qux (200; 6.268451ms)
Aug 24 05:02:22.841: INFO: (18) /api/v1/namespaces/proxy-443/services/https:proxy-service-z4bpb:tlsportname2/proxy/: tls qux (200; 6.606136ms)
Aug 24 05:02:22.842: INFO: (18) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:162/proxy/: bar (200; 6.633274ms)
Aug 24 05:02:22.842: INFO: (18) /api/v1/namespaces/proxy-443/pods/proxy-service-z4bpb-jc22z:1080/proxy/: testtesttest (200; 8.172954ms)
Aug 24 05:02:22.853: INFO: (19) /api/v1/namespaces/proxy-443/services/proxy-service-z4bpb:portname2/proxy/: bar (200; 8.302523ms)
Aug 24 05:02:22.853: INFO: (19) /api/v1/namespaces/proxy-443/pods/http:proxy-service-z4bpb-jc22z:160/proxy/: foo (200; 8.326301ms)
Aug 24 05:02:22.853: INFO: (19) /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:443/proxy/: t... (200; 8.43613ms)
Aug 24 05:02:22.854: INFO: (19) /api/v1/namespaces/proxy-443/services/http:proxy-service-z4bpb:portname2/proxy/: bar (200; 9.526362ms)
STEP: deleting ReplicationController proxy-service-z4bpb in namespace proxy-443, will wait for the garbage collector to delete the pods
Aug 24 05:02:22.917: INFO: Deleting ReplicationController proxy-service-z4bpb took: 9.446412ms
Aug 24 05:02:23.218: INFO: Terminating ReplicationController proxy-service-z4bpb pods took: 301.104884ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:02:33.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-443" for this suite.
Aug 24 05:02:39.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:02:39.727: INFO: namespace proxy-443 deletion completed in 6.293705394s

• [SLOW TEST:22.359 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
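Note: every URL polled in the proxy test above follows the apiserver proxy-subresource pattern `/api/v1/namespaces/<ns>/{pods,services}/[<scheme>:]<name>[:<port>]/proxy/`. A minimal sketch of that path construction (the helper name is hypothetical, not part of the e2e framework):

```python
def proxy_path(namespace, name, port=None, scheme=None, kind="pods"):
    """Build an apiserver proxy path like the ones polled in the log above.

    scheme ("http"/"https") and port (number or named port) are optional,
    matching forms such as
    /api/v1/namespaces/proxy-443/pods/https:proxy-service-z4bpb-jc22z:460/proxy/
    """
    target = name
    if scheme:
        target = f"{scheme}:{target}"       # scheme prefix, colon-separated
    if port is not None:
        target = f"{target}:{port}"          # optional port or port name suffix
    return f"/api/v1/namespaces/{namespace}/{kind}/{target}/proxy/"
```

Each of the twenty iterations in the log simply issues GETs against these generated paths and records status and latency.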
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:02:39.728: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 24 05:02:39.818: INFO: Waiting up to 5m0s for pod "downward-api-a3d1a074-e6fe-47ff-ab43-73d11c76f381" in namespace "downward-api-2235" to be "success or failure"
Aug 24 05:02:39.827: INFO: Pod "downward-api-a3d1a074-e6fe-47ff-ab43-73d11c76f381": Phase="Pending", Reason="", readiness=false. Elapsed: 8.563439ms
Aug 24 05:02:41.837: INFO: Pod "downward-api-a3d1a074-e6fe-47ff-ab43-73d11c76f381": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017760316s
Aug 24 05:02:43.843: INFO: Pod "downward-api-a3d1a074-e6fe-47ff-ab43-73d11c76f381": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024597914s
STEP: Saw pod success
Aug 24 05:02:43.844: INFO: Pod "downward-api-a3d1a074-e6fe-47ff-ab43-73d11c76f381" satisfied condition "success or failure"
Aug 24 05:02:43.856: INFO: Trying to get logs from node iruya-worker2 pod downward-api-a3d1a074-e6fe-47ff-ab43-73d11c76f381 container dapi-container: 
STEP: delete the pod
Aug 24 05:02:43.904: INFO: Waiting for pod downward-api-a3d1a074-e6fe-47ff-ab43-73d11c76f381 to disappear
Aug 24 05:02:43.922: INFO: Pod downward-api-a3d1a074-e6fe-47ff-ab43-73d11c76f381 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:02:43.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2235" for this suite.
Aug 24 05:02:49.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:02:50.121: INFO: namespace downward-api-2235 deletion completed in 6.189112327s

• [SLOW TEST:10.393 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
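Note: the Downward API test above injects the node's address into the container through an env var whose `valueFrom.fieldRef.fieldPath` is `status.hostIP`. A rough sketch of how such fieldRef env vars resolve against pod status (a simplification of kubelet behavior, handling only the two status fields this test family uses):

```python
def resolve_downward_env(env_spec, pod_status):
    """Resolve fieldRef-based env vars the way the Downward API exposes them.

    env_spec: list of env var dicts in Kubernetes PodSpec form.
    pod_status: dict with "hostIP"/"podIP" keys (assumed shape for this sketch).
    Only status.hostIP and status.podIP are handled here.
    """
    resolved = {}
    for var in env_spec:
        field = var.get("valueFrom", {}).get("fieldRef", {}).get("fieldPath")
        if field == "status.hostIP":
            resolved[var["name"]] = pod_status["hostIP"]
        elif field == "status.podIP":
            resolved[var["name"]] = pod_status["podIP"]
        else:
            resolved[var["name"]] = var.get("value", "")  # plain literal env var
    return resolved
```

The test then greps the container's printed environment for the expected address.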
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:02:50.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-gghj
STEP: Creating a pod to test atomic-volume-subpath
Aug 24 05:02:50.213: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-gghj" in namespace "subpath-699" to be "success or failure"
Aug 24 05:02:50.227: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Pending", Reason="", readiness=false. Elapsed: 13.709972ms
Aug 24 05:02:52.502: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.288675903s
Aug 24 05:02:54.509: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 4.296019732s
Aug 24 05:02:56.517: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 6.303696392s
Aug 24 05:02:58.525: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 8.311639863s
Aug 24 05:03:00.533: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 10.319788559s
Aug 24 05:03:02.564: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 12.35069749s
Aug 24 05:03:04.571: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 14.357525104s
Aug 24 05:03:06.577: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 16.363824759s
Aug 24 05:03:08.583: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 18.369702525s
Aug 24 05:03:10.590: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 20.376835314s
Aug 24 05:03:12.595: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Running", Reason="", readiness=true. Elapsed: 22.381973835s
Aug 24 05:03:14.603: INFO: Pod "pod-subpath-test-configmap-gghj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.389344916s
STEP: Saw pod success
Aug 24 05:03:14.603: INFO: Pod "pod-subpath-test-configmap-gghj" satisfied condition "success or failure"
Aug 24 05:03:14.607: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-gghj container test-container-subpath-configmap-gghj: 
STEP: delete the pod
Aug 24 05:03:14.630: INFO: Waiting for pod pod-subpath-test-configmap-gghj to disappear
Aug 24 05:03:14.657: INFO: Pod pod-subpath-test-configmap-gghj no longer exists
STEP: Deleting pod pod-subpath-test-configmap-gghj
Aug 24 05:03:14.657: INFO: Deleting pod "pod-subpath-test-configmap-gghj" in namespace "subpath-699"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:03:14.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-699" for this suite.
Aug 24 05:03:20.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:03:20.838: INFO: namespace subpath-699 deletion completed in 6.169715946s

• [SLOW TEST:30.715 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
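Note: the repeated "Phase=Pending/Running/Succeeded ... Elapsed: ..." lines above come from the framework polling the pod on a roughly 2-second cadence until it reaches a terminal phase or the 5m0s budget runs out. A simplified sketch of that wait loop (helper name and injectable clock/sleep are assumptions for testability, not the framework's API):

```python
import time

def wait_for_pod_phase(get_phase, timeout=300.0, interval=2.0,
                       clock=time.monotonic, sleep=time.sleep):
    """Poll until the pod reports a terminal phase, or raise on timeout.

    get_phase: callable returning the current phase string (caller-supplied).
    Mirrors the "Waiting up to 5m0s for pod ... to be 'success or failure'"
    loop in the log above.
    """
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):   # terminal phases end the wait
            return phase
        sleep(interval)                        # ~2s between polls, as logged
    raise TimeoutError("pod did not reach a terminal phase in time")
```

The subpath test stays in `Running` for ~20s because the container re-reads the mounted file across several atomic-writer updates before exiting.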
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:03:20.849: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:03:20.955: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6300e317-8502-4d23-8df0-1056dca3dca1" in namespace "downward-api-3913" to be "success or failure"
Aug 24 05:03:20.973: INFO: Pod "downwardapi-volume-6300e317-8502-4d23-8df0-1056dca3dca1": Phase="Pending", Reason="", readiness=false. Elapsed: 17.718547ms
Aug 24 05:03:22.981: INFO: Pod "downwardapi-volume-6300e317-8502-4d23-8df0-1056dca3dca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025749665s
Aug 24 05:03:24.988: INFO: Pod "downwardapi-volume-6300e317-8502-4d23-8df0-1056dca3dca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033183675s
STEP: Saw pod success
Aug 24 05:03:24.989: INFO: Pod "downwardapi-volume-6300e317-8502-4d23-8df0-1056dca3dca1" satisfied condition "success or failure"
Aug 24 05:03:24.994: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6300e317-8502-4d23-8df0-1056dca3dca1 container client-container: 
STEP: delete the pod
Aug 24 05:03:25.055: INFO: Waiting for pod downwardapi-volume-6300e317-8502-4d23-8df0-1056dca3dca1 to disappear
Aug 24 05:03:25.066: INFO: Pod downwardapi-volume-6300e317-8502-4d23-8df0-1056dca3dca1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:03:25.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3913" for this suite.
Aug 24 05:03:31.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:03:31.302: INFO: namespace downward-api-3913 deletion completed in 6.225122107s

• [SLOW TEST:10.453 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
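Note: the "set mode on item file" test above mounts a downward API volume with an explicit per-item `mode` and has the container print the file's permissions. Two small sketches of the semantics involved (helper names are illustrative, not the test binary's): the per-item mode overrides the volume-level `defaultMode`, and the container-side check compares an ls-style rendering of the mode.

```python
import stat

def expected_file_mode(item_mode=None, default_mode=0o644):
    """Per-item `mode` wins over the volume's `defaultMode` (both octal)."""
    return item_mode if item_mode is not None else default_mode

def mode_string(mode):
    """Render a permission value the way `ls -l` would for a regular file,
    e.g. 0o400 -> "-r--------" (uses stdlib stat.filemode)."""
    return stat.filemode(stat.S_IFREG | mode)
```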
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:03:31.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-8803
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Aug 24 05:03:31.426: INFO: Found 0 stateful pods, waiting for 3
Aug 24 05:03:41.464: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 05:03:41.465: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 05:03:41.465: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Aug 24 05:03:51.434: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 05:03:51.435: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 05:03:51.435: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 05:03:51.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8803 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 05:03:55.517: INFO: stderr: "I0824 05:03:55.359512    1870 log.go:172] (0x2a57dc0) (0x2a57e30) Create stream\nI0824 05:03:55.361234    1870 log.go:172] (0x2a57dc0) (0x2a57e30) Stream added, broadcasting: 1\nI0824 05:03:55.377156    1870 log.go:172] (0x2a57dc0) Reply frame received for 1\nI0824 05:03:55.377795    1870 log.go:172] (0x2a57dc0) (0x284a1c0) Create stream\nI0824 05:03:55.377898    1870 log.go:172] (0x2a57dc0) (0x284a1c0) Stream added, broadcasting: 3\nI0824 05:03:55.379252    1870 log.go:172] (0x2a57dc0) Reply frame received for 3\nI0824 05:03:55.379479    1870 log.go:172] (0x2a57dc0) (0x28100e0) Create stream\nI0824 05:03:55.379548    1870 log.go:172] (0x2a57dc0) (0x28100e0) Stream added, broadcasting: 5\nI0824 05:03:55.380812    1870 log.go:172] (0x2a57dc0) Reply frame received for 5\nI0824 05:03:55.461097    1870 log.go:172] (0x2a57dc0) Data frame received for 5\nI0824 05:03:55.461453    1870 log.go:172] (0x28100e0) (5) Data frame handling\nI0824 05:03:55.462049    1870 log.go:172] (0x28100e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 05:03:55.492677    1870 log.go:172] (0x2a57dc0) Data frame received for 3\nI0824 05:03:55.493052    1870 log.go:172] (0x284a1c0) (3) Data frame handling\nI0824 05:03:55.493290    1870 log.go:172] (0x284a1c0) (3) Data frame sent\nI0824 05:03:55.493479    1870 log.go:172] (0x2a57dc0) Data frame received for 3\nI0824 05:03:55.493683    1870 log.go:172] (0x284a1c0) (3) Data frame handling\nI0824 05:03:55.493981    1870 log.go:172] (0x2a57dc0) Data frame received for 5\nI0824 05:03:55.494245    1870 log.go:172] (0x28100e0) (5) Data frame handling\nI0824 05:03:55.494904    1870 log.go:172] (0x2a57dc0) Data frame received for 1\nI0824 05:03:55.495062    1870 log.go:172] (0x2a57e30) (1) Data frame handling\nI0824 05:03:55.495251    1870 log.go:172] (0x2a57e30) (1) Data frame sent\nI0824 05:03:55.496070    1870 log.go:172] (0x2a57dc0) (0x2a57e30) Stream removed, broadcasting: 1\nI0824 05:03:55.498999    1870 log.go:172] (0x2a57dc0) Go away received\nI0824 05:03:55.502856    1870 log.go:172] (0x2a57dc0) (0x2a57e30) Stream removed, broadcasting: 1\nI0824 05:03:55.503272    1870 log.go:172] (0x2a57dc0) (0x284a1c0) Stream removed, broadcasting: 3\nI0824 05:03:55.503610    1870 log.go:172] (0x2a57dc0) (0x28100e0) Stream removed, broadcasting: 5\n"
Aug 24 05:03:55.518: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 05:03:55.518: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Aug 24 05:04:05.566: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Aug 24 05:04:15.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8803 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:04:16.988: INFO: stderr: "I0824 05:04:16.876364    1900 log.go:172] (0x2814a80) (0x2688070) Create stream\nI0824 05:04:16.878452    1900 log.go:172] (0x2814a80) (0x2688070) Stream added, broadcasting: 1\nI0824 05:04:16.887031    1900 log.go:172] (0x2814a80) Reply frame received for 1\nI0824 05:04:16.887630    1900 log.go:172] (0x2814a80) (0x2acc070) Create stream\nI0824 05:04:16.887725    1900 log.go:172] (0x2814a80) (0x2acc070) Stream added, broadcasting: 3\nI0824 05:04:16.889534    1900 log.go:172] (0x2814a80) Reply frame received for 3\nI0824 05:04:16.890070    1900 log.go:172] (0x2814a80) (0x29020e0) Create stream\nI0824 05:04:16.890220    1900 log.go:172] (0x2814a80) (0x29020e0) Stream added, broadcasting: 5\nI0824 05:04:16.891867    1900 log.go:172] (0x2814a80) Reply frame received for 5\nI0824 05:04:16.964658    1900 log.go:172] (0x2814a80) Data frame received for 3\nI0824 05:04:16.965530    1900 log.go:172] (0x2814a80) Data frame received for 5\nI0824 05:04:16.965735    1900 log.go:172] (0x29020e0) (5) Data frame handling\nI0824 05:04:16.965924    1900 log.go:172] (0x2814a80) Data frame received for 1\nI0824 05:04:16.966156    1900 log.go:172] (0x2688070) (1) Data frame handling\nI0824 05:04:16.966467    1900 log.go:172] (0x2acc070) (3) Data frame handling\nI0824 05:04:16.967229    1900 log.go:172] (0x29020e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0824 05:04:16.967520    1900 log.go:172] (0x2acc070) (3) Data frame sent\nI0824 05:04:16.968083    1900 log.go:172] (0x2688070) (1) Data frame sent\nI0824 05:04:16.968306    1900 log.go:172] (0x2814a80) Data frame received for 3\nI0824 05:04:16.968509    1900 log.go:172] (0x2acc070) (3) Data frame handling\nI0824 05:04:16.968810    1900 log.go:172] (0x2814a80) Data frame received for 5\nI0824 05:04:16.969422    1900 log.go:172] (0x2814a80) (0x2688070) Stream removed, broadcasting: 1\nI0824 05:04:16.970184    1900 log.go:172] (0x29020e0) (5) Data frame handling\nI0824 05:04:16.972091    1900 log.go:172] (0x2814a80) Go away received\nI0824 05:04:16.975318    1900 log.go:172] (0x2814a80) (0x2688070) Stream removed, broadcasting: 1\nI0824 05:04:16.975556    1900 log.go:172] (0x2814a80) (0x2acc070) Stream removed, broadcasting: 3\nI0824 05:04:16.975746    1900 log.go:172] (0x2814a80) (0x29020e0) Stream removed, broadcasting: 5\n"
Aug 24 05:04:16.990: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 24 05:04:16.990: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 24 05:04:37.034: INFO: Waiting for StatefulSet statefulset-8803/ss2 to complete update
Aug 24 05:04:37.035: INFO: Waiting for Pod statefulset-8803/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Aug 24 05:04:47.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8803 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 05:04:48.525: INFO: stderr: "I0824 05:04:48.358989    1923 log.go:172] (0x2744770) (0x27447e0) Create stream\nI0824 05:04:48.361513    1923 log.go:172] (0x2744770) (0x27447e0) Stream added, broadcasting: 1\nI0824 05:04:48.376120    1923 log.go:172] (0x2744770) Reply frame received for 1\nI0824 05:04:48.376660    1923 log.go:172] (0x2744770) (0x2886000) Create stream\nI0824 05:04:48.376781    1923 log.go:172] (0x2744770) (0x2886000) Stream added, broadcasting: 3\nI0824 05:04:48.378223    1923 log.go:172] (0x2744770) Reply frame received for 3\nI0824 05:04:48.378446    1923 log.go:172] (0x2744770) (0x2832150) Create stream\nI0824 05:04:48.378513    1923 log.go:172] (0x2744770) (0x2832150) Stream added, broadcasting: 5\nI0824 05:04:48.379515    1923 log.go:172] (0x2744770) Reply frame received for 5\nI0824 05:04:48.466783    1923 log.go:172] (0x2744770) Data frame received for 5\nI0824 05:04:48.467192    1923 log.go:172] (0x2832150) (5) Data frame handling\nI0824 05:04:48.467939    1923 log.go:172] (0x2832150) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 05:04:48.502935    1923 log.go:172] (0x2744770) Data frame received for 3\nI0824 05:04:48.503144    1923 log.go:172] (0x2744770) Data frame received for 5\nI0824 05:04:48.503309    1923 log.go:172] (0x2832150) (5) Data frame handling\nI0824 05:04:48.503404    1923 log.go:172] (0x2886000) (3) Data frame handling\nI0824 05:04:48.503568    1923 log.go:172] (0x2886000) (3) Data frame sent\nI0824 05:04:48.503758    1923 log.go:172] (0x2744770) Data frame received for 3\nI0824 05:04:48.503907    1923 log.go:172] (0x2886000) (3) Data frame handling\nI0824 05:04:48.504891    1923 log.go:172] (0x2744770) Data frame received for 1\nI0824 05:04:48.505002    1923 log.go:172] (0x27447e0) (1) Data frame handling\nI0824 05:04:48.505108    1923 log.go:172] (0x27447e0) (1) Data frame sent\nI0824 05:04:48.506373    1923 log.go:172] (0x2744770) (0x27447e0) Stream removed, broadcasting: 1\nI0824 05:04:48.509108    1923 log.go:172] (0x2744770) Go away received\nI0824 05:04:48.512117    1923 log.go:172] (0x2744770) (0x27447e0) Stream removed, broadcasting: 1\nI0824 05:04:48.512468    1923 log.go:172] (0x2744770) (0x2886000) Stream removed, broadcasting: 3\nI0824 05:04:48.512700    1923 log.go:172] (0x2744770) (0x2832150) Stream removed, broadcasting: 5\n"
Aug 24 05:04:48.526: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 05:04:48.526: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 24 05:04:58.631: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Aug 24 05:05:08.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8803 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:05:10.151: INFO: stderr: "I0824 05:05:09.983039    1946 log.go:172] (0x2694070) (0x26940e0) Create stream\nI0824 05:05:09.988540    1946 log.go:172] (0x2694070) (0x26940e0) Stream added, broadcasting: 1\nI0824 05:05:10.001454    1946 log.go:172] (0x2694070) Reply frame received for 1\nI0824 05:05:10.001906    1946 log.go:172] (0x2694070) (0x2b2e000) Create stream\nI0824 05:05:10.001973    1946 log.go:172] (0x2694070) (0x2b2e000) Stream added, broadcasting: 3\nI0824 05:05:10.003012    1946 log.go:172] (0x2694070) Reply frame received for 3\nI0824 05:05:10.003206    1946 log.go:172] (0x2694070) (0x2b2e070) Create stream\nI0824 05:05:10.003256    1946 log.go:172] (0x2694070) (0x2b2e070) Stream added, broadcasting: 5\nI0824 05:05:10.004259    1946 log.go:172] (0x2694070) Reply frame received for 5\nI0824 05:05:10.129960    1946 log.go:172] (0x2694070) Data frame received for 3\nI0824 05:05:10.130663    1946 log.go:172] (0x2694070) Data frame received for 5\nI0824 05:05:10.130963    1946 log.go:172] (0x2694070) Data frame received for 1\nI0824 05:05:10.131319    1946 log.go:172] (0x26940e0) (1) Data frame handling\nI0824 05:05:10.131529    1946 log.go:172] (0x2b2e000) (3) Data frame handling\nI0824 05:05:10.132081    1946 log.go:172] (0x2b2e070) (5) Data frame handling\nI0824 05:05:10.133023    1946 log.go:172] (0x2b2e000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0824 05:05:10.133286    1946 log.go:172] (0x26940e0) (1) Data frame sent\nI0824 05:05:10.133669    1946 log.go:172] (0x2694070) Data frame received for 3\nI0824 05:05:10.133778    1946 log.go:172] (0x2b2e000) (3) Data frame handling\nI0824 05:05:10.134013    1946 log.go:172] (0x2b2e070) (5) Data frame sent\nI0824 05:05:10.134179    1946 log.go:172] (0x2694070) Data frame received for 5\nI0824 05:05:10.134262    1946 log.go:172] (0x2b2e070) (5) Data frame handling\nI0824 05:05:10.135125    1946 log.go:172] (0x2694070) (0x26940e0) Stream removed, broadcasting: 1\nI0824 05:05:10.135418    1946 log.go:172] (0x2694070) Go away received\nI0824 05:05:10.138150    1946 log.go:172] (0x2694070) (0x26940e0) Stream removed, broadcasting: 1\nI0824 05:05:10.138379    1946 log.go:172] (0x2694070) (0x2b2e000) Stream removed, broadcasting: 3\nI0824 05:05:10.138538    1946 log.go:172] (0x2694070) (0x2b2e070) Stream removed, broadcasting: 5\n"
Aug 24 05:05:10.152: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 24 05:05:10.152: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 24 05:05:20.189: INFO: Waiting for StatefulSet statefulset-8803/ss2 to complete update
Aug 24 05:05:20.189: INFO: Waiting for Pod statefulset-8803/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 24 05:05:20.189: INFO: Waiting for Pod statefulset-8803/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 24 05:05:20.189: INFO: Waiting for Pod statefulset-8803/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 24 05:05:30.204: INFO: Waiting for StatefulSet statefulset-8803/ss2 to complete update
Aug 24 05:05:30.205: INFO: Waiting for Pod statefulset-8803/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Aug 24 05:05:40.206: INFO: Waiting for StatefulSet statefulset-8803/ss2 to complete update
Aug 24 05:05:40.206: INFO: Waiting for Pod statefulset-8803/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 24 05:05:50.205: INFO: Deleting all statefulset in ns statefulset-8803
Aug 24 05:05:50.210: INFO: Scaling statefulset ss2 to 0
Aug 24 05:06:20.234: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 05:06:20.239: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:06:20.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-8803" for this suite.
Aug 24 05:06:26.297: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:06:26.445: INFO: namespace statefulset-8803 deletion completed in 6.186034089s

• [SLOW TEST:175.140 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
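Note: two details of the StatefulSet rollout above are worth making explicit: pods are updated in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), and the "Waiting for Pod ... to have revision X update revision Y" lines track which pods still carry the old controller-revision-hash. A small sketch of both (hypothetical helpers mirroring the logged behavior, not the controller's code):

```python
def reverse_ordinal_order(name, replicas):
    """Order in which a RollingUpdate StatefulSet replaces its pods:
    highest ordinal first, as the 'reverse ordinal order' step shows."""
    return [f"{name}-{i}" for i in reversed(range(replicas))]

def pending_pods(pod_revisions, update_revision):
    """Pods whose controller-revision-hash differs from the update revision,
    i.e. the ones the 'Waiting for Pod ...' log lines are still tracking."""
    return sorted(p for p, rev in pod_revisions.items() if rev != update_revision)
```

In the rollback phase the roles of the two revision hashes simply swap: ss2-7c9b54fd4c becomes the "old" revision and ss2-6c5cd755cd the update target.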
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:06:26.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6107 to expose endpoints map[]
Aug 24 05:06:26.572: INFO: successfully validated that service multi-endpoint-test in namespace services-6107 exposes endpoints map[] (30.626904ms elapsed)
STEP: Creating pod pod1 in namespace services-6107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6107 to expose endpoints map[pod1:[100]]
Aug 24 05:06:29.782: INFO: successfully validated that service multi-endpoint-test in namespace services-6107 exposes endpoints map[pod1:[100]] (3.202591467s elapsed)
STEP: Creating pod pod2 in namespace services-6107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6107 to expose endpoints map[pod1:[100] pod2:[101]]
Aug 24 05:06:33.905: INFO: successfully validated that service multi-endpoint-test in namespace services-6107 exposes endpoints map[pod1:[100] pod2:[101]] (4.108946009s elapsed)
STEP: Deleting pod pod1 in namespace services-6107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6107 to expose endpoints map[pod2:[101]]
Aug 24 05:06:33.974: INFO: successfully validated that service multi-endpoint-test in namespace services-6107 exposes endpoints map[pod2:[101]] (62.084958ms elapsed)
STEP: Deleting pod pod2 in namespace services-6107
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6107 to expose endpoints map[]
Aug 24 05:06:33.987: INFO: successfully validated that service multi-endpoint-test in namespace services-6107 exposes endpoints map[] (7.489502ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:06:34.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6107" for this suite.
Aug 24 05:06:56.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:06:56.394: INFO: namespace services-6107 deletion completed in 22.374396358s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:29.945 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
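The multiport Service above is built programmatically inside the e2e framework, so no manifest appears in the log. As a rough sketch only, a Service producing the `map[pod1:[100] pod2:[101]]` endpoints seen above could look like the following (the label selector, port names, and front-end port numbers are assumptions; the target ports 100/101 come from the endpoint maps in the log):

```yaml
# Hypothetical sketch of the multi-port Service the test creates in Go.
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
  namespace: services-6107
spec:
  selector:
    app: multi-endpoint-test   # assumed label matching pod1/pod2
  ports:
  - name: portname1            # assumed name
    port: 80                   # assumed front-end port
    targetPort: 100            # matches the [100] endpoint port in the log
  - name: portname2            # assumed name
    port: 81                   # assumed front-end port
    targetPort: 101            # matches the [101] endpoint port in the log
```

As the log shows, the Endpoints object tracks pod creation and deletion: each pod that matches the selector contributes its ready addresses per named port, and deleting a pod removes it from the map.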
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:06:56.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:06:56.516: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a3e9a79-930c-4fe2-8d08-2b73d10d3916" in namespace "downward-api-9433" to be "success or failure"
Aug 24 05:06:56.533: INFO: Pod "downwardapi-volume-4a3e9a79-930c-4fe2-8d08-2b73d10d3916": Phase="Pending", Reason="", readiness=false. Elapsed: 16.779855ms
Aug 24 05:06:58.540: INFO: Pod "downwardapi-volume-4a3e9a79-930c-4fe2-8d08-2b73d10d3916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023376305s
Aug 24 05:07:00.547: INFO: Pod "downwardapi-volume-4a3e9a79-930c-4fe2-8d08-2b73d10d3916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030983503s
STEP: Saw pod success
Aug 24 05:07:00.547: INFO: Pod "downwardapi-volume-4a3e9a79-930c-4fe2-8d08-2b73d10d3916" satisfied condition "success or failure"
Aug 24 05:07:00.552: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4a3e9a79-930c-4fe2-8d08-2b73d10d3916 container client-container: 
STEP: delete the pod
Aug 24 05:07:00.592: INFO: Waiting for pod downwardapi-volume-4a3e9a79-930c-4fe2-8d08-2b73d10d3916 to disappear
Aug 24 05:07:00.611: INFO: Pod downwardapi-volume-4a3e9a79-930c-4fe2-8d08-2b73d10d3916 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:07:00.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9433" for this suite.
Aug 24 05:07:06.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:07:06.831: INFO: namespace downward-api-9433 deletion completed in 6.210781383s

• [SLOW TEST:10.431 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
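The DefaultMode test pod is also generated in Go by the framework. A minimal sketch of a pod with a downward API volume that sets `defaultMode` on its projected files might look like this (the image, args, mount path, and mode value are assumptions for illustration; only the container name `client-container` appears in the log):

```yaml
# Hypothetical sketch: downward API volume with an explicit defaultMode.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # real test names carry a UUID suffix
spec:
  restartPolicy: Never
  containers:
  - name: client-container           # matches the container name in the log
    image: busybox                   # assumed image
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400              # assumed mode; applied to all projected files
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The test's "success or failure" condition corresponds to the pod running to completion with the container asserting the expected file mode.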
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:07:06.835: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-shbh
STEP: Creating a pod to test atomic-volume-subpath
Aug 24 05:07:06.966: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-shbh" in namespace "subpath-5682" to be "success or failure"
Aug 24 05:07:06.976: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Pending", Reason="", readiness=false. Elapsed: 9.425764ms
Aug 24 05:07:08.982: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016223706s
Aug 24 05:07:10.990: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 4.023612521s
Aug 24 05:07:12.997: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 6.030955588s
Aug 24 05:07:15.005: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 8.038890651s
Aug 24 05:07:17.012: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 10.046151875s
Aug 24 05:07:19.020: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 12.053435272s
Aug 24 05:07:21.027: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 14.06107052s
Aug 24 05:07:23.035: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 16.068696241s
Aug 24 05:07:25.042: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 18.076184405s
Aug 24 05:07:27.050: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 20.083801031s
Aug 24 05:07:29.057: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Running", Reason="", readiness=true. Elapsed: 22.090790102s
Aug 24 05:07:31.064: INFO: Pod "pod-subpath-test-downwardapi-shbh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.097618402s
STEP: Saw pod success
Aug 24 05:07:31.064: INFO: Pod "pod-subpath-test-downwardapi-shbh" satisfied condition "success or failure"
Aug 24 05:07:31.068: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-shbh container test-container-subpath-downwardapi-shbh: 
STEP: delete the pod
Aug 24 05:07:31.098: INFO: Waiting for pod pod-subpath-test-downwardapi-shbh to disappear
Aug 24 05:07:31.121: INFO: Pod pod-subpath-test-downwardapi-shbh no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-shbh
Aug 24 05:07:31.121: INFO: Deleting pod "pod-subpath-test-downwardapi-shbh" in namespace "subpath-5682"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:07:31.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5682" for this suite.
Aug 24 05:07:37.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:07:37.352: INFO: namespace subpath-5682 deletion completed in 6.218026299s

• [SLOW TEST:30.517 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
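The subpath test keeps its pod `Running` for ~20s while it repeatedly reads a single file mounted via `subPath` from a downward API volume. A hypothetical sketch in the spirit of `pod-subpath-test-downwardapi-shbh` (image, command, and paths are assumptions):

```yaml
# Hypothetical sketch: mounting one file out of a downward API volume via subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-downwardapi
    image: busybox                          # assumed image
    command: ["sh", "-c", "for i in $(seq 1 10); do cat /test-volume/podname; sleep 2; done"]
    volumeMounts:
    - name: downward
      mountPath: /test-volume/podname       # the single projected file
      subPath: podname                      # path inside the volume to mount
  volumes:
  - name: downward
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The repeated `Phase="Running"` polls in the log correspond to this read loop; the pod transitions to `Succeeded` once the loop exits.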
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:07:37.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 24 05:07:37.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-282'
Aug 24 05:07:39.065: INFO: stderr: ""
Aug 24 05:07:39.065: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 24 05:07:39.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-282'
Aug 24 05:07:40.247: INFO: stderr: ""
Aug 24 05:07:40.247: INFO: stdout: "update-demo-nautilus-gx8dx update-demo-nautilus-xvjkx "
Aug 24 05:07:40.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx8dx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:07:41.371: INFO: stderr: ""
Aug 24 05:07:41.372: INFO: stdout: ""
Aug 24 05:07:41.372: INFO: update-demo-nautilus-gx8dx is created but not running
Aug 24 05:07:46.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-282'
Aug 24 05:07:47.547: INFO: stderr: ""
Aug 24 05:07:47.547: INFO: stdout: "update-demo-nautilus-gx8dx update-demo-nautilus-xvjkx "
Aug 24 05:07:47.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx8dx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:07:48.679: INFO: stderr: ""
Aug 24 05:07:48.679: INFO: stdout: "true"
Aug 24 05:07:48.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx8dx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:07:49.835: INFO: stderr: ""
Aug 24 05:07:49.836: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:07:49.836: INFO: validating pod update-demo-nautilus-gx8dx
Aug 24 05:07:49.843: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:07:49.843: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:07:49.844: INFO: update-demo-nautilus-gx8dx is verified up and running
Aug 24 05:07:49.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xvjkx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:07:51.056: INFO: stderr: ""
Aug 24 05:07:51.056: INFO: stdout: "true"
Aug 24 05:07:51.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xvjkx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:07:52.206: INFO: stderr: ""
Aug 24 05:07:52.206: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:07:52.206: INFO: validating pod update-demo-nautilus-xvjkx
Aug 24 05:07:52.211: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:07:52.212: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:07:52.212: INFO: update-demo-nautilus-xvjkx is verified up and running
STEP: scaling down the replication controller
Aug 24 05:07:52.221: INFO: scanned /root for discovery docs: 
Aug 24 05:07:52.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-282'
Aug 24 05:07:54.531: INFO: stderr: ""
Aug 24 05:07:54.531: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 24 05:07:54.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-282'
Aug 24 05:07:55.719: INFO: stderr: ""
Aug 24 05:07:55.719: INFO: stdout: "update-demo-nautilus-gx8dx update-demo-nautilus-xvjkx "
STEP: Replicas for name=update-demo: expected=1 actual=2
Aug 24 05:08:00.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-282'
Aug 24 05:08:01.884: INFO: stderr: ""
Aug 24 05:08:01.884: INFO: stdout: "update-demo-nautilus-gx8dx "
Aug 24 05:08:01.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx8dx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:08:03.042: INFO: stderr: ""
Aug 24 05:08:03.042: INFO: stdout: "true"
Aug 24 05:08:03.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx8dx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:08:04.200: INFO: stderr: ""
Aug 24 05:08:04.200: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:08:04.200: INFO: validating pod update-demo-nautilus-gx8dx
Aug 24 05:08:04.205: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:08:04.205: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:08:04.205: INFO: update-demo-nautilus-gx8dx is verified up and running
STEP: scaling up the replication controller
Aug 24 05:08:04.213: INFO: scanned /root for discovery docs: 
Aug 24 05:08:04.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-282'
Aug 24 05:08:05.437: INFO: stderr: ""
Aug 24 05:08:05.437: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 24 05:08:05.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-282'
Aug 24 05:08:06.554: INFO: stderr: ""
Aug 24 05:08:06.554: INFO: stdout: "update-demo-nautilus-gx8dx update-demo-nautilus-m6t2p "
Aug 24 05:08:06.555: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx8dx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:08:07.668: INFO: stderr: ""
Aug 24 05:08:07.668: INFO: stdout: "true"
Aug 24 05:08:07.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gx8dx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:08:08.790: INFO: stderr: ""
Aug 24 05:08:08.790: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:08:08.790: INFO: validating pod update-demo-nautilus-gx8dx
Aug 24 05:08:08.795: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:08:08.795: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:08:08.795: INFO: update-demo-nautilus-gx8dx is verified up and running
Aug 24 05:08:08.796: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m6t2p -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:08:09.943: INFO: stderr: ""
Aug 24 05:08:09.943: INFO: stdout: "true"
Aug 24 05:08:09.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-m6t2p -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-282'
Aug 24 05:08:11.042: INFO: stderr: ""
Aug 24 05:08:11.042: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:08:11.042: INFO: validating pod update-demo-nautilus-m6t2p
Aug 24 05:08:11.048: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:08:11.049: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:08:11.049: INFO: update-demo-nautilus-m6t2p is verified up and running
STEP: using delete to clean up resources
Aug 24 05:08:11.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-282'
Aug 24 05:08:12.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 05:08:12.134: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 24 05:08:12.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-282'
Aug 24 05:08:13.329: INFO: stderr: "No resources found.\n"
Aug 24 05:08:13.329: INFO: stdout: ""
Aug 24 05:08:13.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-282 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 24 05:08:14.484: INFO: stderr: ""
Aug 24 05:08:14.484: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:08:14.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-282" for this suite.
Aug 24 05:08:36.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:08:36.689: INFO: namespace kubectl-282 deletion completed in 22.195185918s

• [SLOW TEST:59.336 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
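The Update Demo test drives everything through `kubectl create -f -`, `kubectl scale`, and Go-template `kubectl get` queries, piping the manifest on stdin, so it never appears in the log. A hypothetical ReplicationController matching the logged commands (the image and the `name=update-demo` label come from the log; everything else is assumed):

```yaml
# Hypothetical sketch of the RC piped to `kubectl create -f -` above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo              # label queried by the -l name=update-demo gets
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo          # container name checked by the Go templates
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
        ports:
        - containerPort: 80        # assumed port
```

Scaling then mirrors the logged commands, e.g. `kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-282`, after which the test polls pod names and container state until the replica count converges.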
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:08:36.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 24 05:08:36.794: INFO: Waiting up to 5m0s for pod "pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97" in namespace "emptydir-8479" to be "success or failure"
Aug 24 05:08:36.802: INFO: Pod "pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.958256ms
Aug 24 05:08:38.809: INFO: Pod "pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01468681s
Aug 24 05:08:40.817: INFO: Pod "pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97": Phase="Running", Reason="", readiness=true. Elapsed: 4.022045093s
Aug 24 05:08:42.824: INFO: Pod "pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029411619s
STEP: Saw pod success
Aug 24 05:08:42.824: INFO: Pod "pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97" satisfied condition "success or failure"
Aug 24 05:08:42.829: INFO: Trying to get logs from node iruya-worker pod pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97 container test-container: 
STEP: delete the pod
Aug 24 05:08:42.861: INFO: Waiting for pod pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97 to disappear
Aug 24 05:08:42.865: INFO: Pod pod-f5c3da25-ba8f-40b1-a946-51f9584d6d97 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:08:42.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8479" for this suite.
Aug 24 05:08:48.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:08:49.089: INFO: namespace emptydir-8479 deletion completed in 6.215377033s

• [SLOW TEST:12.398 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:08:49.090: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Aug 24 05:08:49.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 24 05:08:50.419: INFO: stderr: ""
Aug 24 05:08:50.419: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:08:50.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-947" for this suite.
Aug 24 05:08:56.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:08:56.602: INFO: namespace kubectl-947 deletion completed in 6.170352018s

• [SLOW TEST:7.513 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:08:56.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:08:56.698: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8367495-7f60-42ff-838d-8280bc72115d" in namespace "projected-9015" to be "success or failure"
Aug 24 05:08:56.715: INFO: Pod "downwardapi-volume-c8367495-7f60-42ff-838d-8280bc72115d": Phase="Pending", Reason="", readiness=false. Elapsed: 17.295977ms
Aug 24 05:08:58.723: INFO: Pod "downwardapi-volume-c8367495-7f60-42ff-838d-8280bc72115d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02475853s
Aug 24 05:09:00.730: INFO: Pod "downwardapi-volume-c8367495-7f60-42ff-838d-8280bc72115d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03262662s
STEP: Saw pod success
Aug 24 05:09:00.731: INFO: Pod "downwardapi-volume-c8367495-7f60-42ff-838d-8280bc72115d" satisfied condition "success or failure"
Aug 24 05:09:00.737: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c8367495-7f60-42ff-838d-8280bc72115d container client-container: 
STEP: delete the pod
Aug 24 05:09:00.783: INFO: Waiting for pod downwardapi-volume-c8367495-7f60-42ff-838d-8280bc72115d to disappear
Aug 24 05:09:00.808: INFO: Pod downwardapi-volume-c8367495-7f60-42ff-838d-8280bc72115d no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:09:00.808: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9015" for this suite.
Aug 24 05:09:06.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:09:06.964: INFO: namespace projected-9015 deletion completed in 6.144287449s

• [SLOW TEST:10.360 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:09:06.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-f3328f2f-fa06-4025-8415-c07b94b88b6e
Aug 24 05:09:07.083: INFO: Pod name my-hostname-basic-f3328f2f-fa06-4025-8415-c07b94b88b6e: Found 1 pods out of 1
Aug 24 05:09:07.084: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f3328f2f-fa06-4025-8415-c07b94b88b6e" are running
Aug 24 05:09:11.126: INFO: Pod "my-hostname-basic-f3328f2f-fa06-4025-8415-c07b94b88b6e-vc88v" is running (conditions: [])
Aug 24 05:09:11.127: INFO: Trying to dial the pod
Aug 24 05:09:16.146: INFO: Controller my-hostname-basic-f3328f2f-fa06-4025-8415-c07b94b88b6e: Got expected result from replica 1 [my-hostname-basic-f3328f2f-fa06-4025-8415-c07b94b88b6e-vc88v]: "my-hostname-basic-f3328f2f-fa06-4025-8415-c07b94b88b6e-vc88v", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:09:16.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5507" for this suite.
Aug 24 05:09:22.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:09:22.366: INFO: namespace replication-controller-5507 deletion completed in 6.208631618s

• [SLOW TEST:15.399 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:09:22.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:09:22.450: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63" in namespace "projected-9447" to be "success or failure"
Aug 24 05:09:22.510: INFO: Pod "downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63": Phase="Pending", Reason="", readiness=false. Elapsed: 59.817989ms
Aug 24 05:09:24.601: INFO: Pod "downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1510984s
Aug 24 05:09:26.609: INFO: Pod "downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63": Phase="Pending", Reason="", readiness=false. Elapsed: 4.158992718s
Aug 24 05:09:28.617: INFO: Pod "downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.167186433s
STEP: Saw pod success
Aug 24 05:09:28.618: INFO: Pod "downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63" satisfied condition "success or failure"
Aug 24 05:09:28.623: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63 container client-container: 
STEP: delete the pod
Aug 24 05:09:28.666: INFO: Waiting for pod downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63 to disappear
Aug 24 05:09:28.679: INFO: Pod downwardapi-volume-d2916fdc-85fe-42b4-9c70-92ccbaa2ed63 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:09:28.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9447" for this suite.
Aug 24 05:09:34.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:09:34.848: INFO: namespace projected-9447 deletion completed in 6.15831343s

• [SLOW TEST:12.481 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:09:34.850: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Aug 24 05:09:34.911: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:09:35.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4457" for this suite.
Aug 24 05:09:42.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:09:42.155: INFO: namespace kubectl-4457 deletion completed in 6.158349983s

• [SLOW TEST:7.305 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:09:42.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:09:42.325: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Aug 24 05:09:42.338: INFO: Number of nodes with available pods: 0
Aug 24 05:09:42.338: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Aug 24 05:09:42.401: INFO: Number of nodes with available pods: 0
Aug 24 05:09:42.401: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:43.409: INFO: Number of nodes with available pods: 0
Aug 24 05:09:43.409: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:44.565: INFO: Number of nodes with available pods: 0
Aug 24 05:09:44.565: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:45.424: INFO: Number of nodes with available pods: 0
Aug 24 05:09:45.424: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:46.409: INFO: Number of nodes with available pods: 1
Aug 24 05:09:46.409: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Aug 24 05:09:46.479: INFO: Number of nodes with available pods: 1
Aug 24 05:09:46.479: INFO: Number of running nodes: 0, number of available pods: 1
Aug 24 05:09:47.487: INFO: Number of nodes with available pods: 0
Aug 24 05:09:47.487: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Aug 24 05:09:47.540: INFO: Number of nodes with available pods: 0
Aug 24 05:09:47.540: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:48.601: INFO: Number of nodes with available pods: 0
Aug 24 05:09:48.601: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:49.546: INFO: Number of nodes with available pods: 0
Aug 24 05:09:49.546: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:50.548: INFO: Number of nodes with available pods: 0
Aug 24 05:09:50.548: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:51.547: INFO: Number of nodes with available pods: 0
Aug 24 05:09:51.547: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:52.548: INFO: Number of nodes with available pods: 0
Aug 24 05:09:52.549: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:53.547: INFO: Number of nodes with available pods: 0
Aug 24 05:09:53.547: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:54.547: INFO: Number of nodes with available pods: 0
Aug 24 05:09:54.547: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:55.697: INFO: Number of nodes with available pods: 0
Aug 24 05:09:55.697: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:56.548: INFO: Number of nodes with available pods: 0
Aug 24 05:09:56.548: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:09:57.548: INFO: Number of nodes with available pods: 1
Aug 24 05:09:57.548: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4118, will wait for the garbage collector to delete the pods
Aug 24 05:09:57.620: INFO: Deleting DaemonSet.extensions daemon-set took: 8.95732ms
Aug 24 05:09:57.921: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.99647ms
Aug 24 05:10:13.348: INFO: Number of nodes with available pods: 0
Aug 24 05:10:13.348: INFO: Number of running nodes: 0, number of available pods: 0
Aug 24 05:10:13.353: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4118/daemonsets","resourceVersion":"2295704"},"items":null}

Aug 24 05:10:13.358: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4118/pods","resourceVersion":"2295704"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:10:13.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4118" for this suite.
Aug 24 05:10:19.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:10:19.594: INFO: namespace daemonsets-4118 deletion completed in 6.177653235s

• [SLOW TEST:37.437 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:10:19.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9422
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 24 05:10:19.648: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 24 05:10:47.827: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.28 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9422 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 05:10:47.828: INFO: >>> kubeConfig: /root/.kube/config
I0824 05:10:47.937484       7 log.go:172] (0x910b570) (0x910b730) Create stream
I0824 05:10:47.937658       7 log.go:172] (0x910b570) (0x910b730) Stream added, broadcasting: 1
I0824 05:10:47.942268       7 log.go:172] (0x910b570) Reply frame received for 1
I0824 05:10:47.942584       7 log.go:172] (0x910b570) (0x9da3180) Create stream
I0824 05:10:47.942720       7 log.go:172] (0x910b570) (0x9da3180) Stream added, broadcasting: 3
I0824 05:10:47.945304       7 log.go:172] (0x910b570) Reply frame received for 3
I0824 05:10:47.945623       7 log.go:172] (0x910b570) (0x910b8f0) Create stream
I0824 05:10:47.945809       7 log.go:172] (0x910b570) (0x910b8f0) Stream added, broadcasting: 5
I0824 05:10:47.947861       7 log.go:172] (0x910b570) Reply frame received for 5
I0824 05:10:49.051040       7 log.go:172] (0x910b570) Data frame received for 3
I0824 05:10:49.051331       7 log.go:172] (0x9da3180) (3) Data frame handling
I0824 05:10:49.051535       7 log.go:172] (0x910b570) Data frame received for 5
I0824 05:10:49.051775       7 log.go:172] (0x910b8f0) (5) Data frame handling
I0824 05:10:49.051992       7 log.go:172] (0x9da3180) (3) Data frame sent
I0824 05:10:49.052212       7 log.go:172] (0x910b570) Data frame received for 3
I0824 05:10:49.052357       7 log.go:172] (0x9da3180) (3) Data frame handling
I0824 05:10:49.053237       7 log.go:172] (0x910b570) Data frame received for 1
I0824 05:10:49.053442       7 log.go:172] (0x910b730) (1) Data frame handling
I0824 05:10:49.053671       7 log.go:172] (0x910b730) (1) Data frame sent
I0824 05:10:49.053915       7 log.go:172] (0x910b570) (0x910b730) Stream removed, broadcasting: 1
I0824 05:10:49.054158       7 log.go:172] (0x910b570) Go away received
I0824 05:10:49.054694       7 log.go:172] (0x910b570) (0x910b730) Stream removed, broadcasting: 1
I0824 05:10:49.054879       7 log.go:172] (0x910b570) (0x9da3180) Stream removed, broadcasting: 3
I0824 05:10:49.055040       7 log.go:172] (0x910b570) (0x910b8f0) Stream removed, broadcasting: 5
Aug 24 05:10:49.055: INFO: Found all expected endpoints: [netserver-0]
Aug 24 05:10:49.060: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.242 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9422 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 05:10:49.061: INFO: >>> kubeConfig: /root/.kube/config
I0824 05:10:49.161236       7 log.go:172] (0x966a930) (0x966abd0) Create stream
I0824 05:10:49.161412       7 log.go:172] (0x966a930) (0x966abd0) Stream added, broadcasting: 1
I0824 05:10:49.164884       7 log.go:172] (0x966a930) Reply frame received for 1
I0824 05:10:49.165104       7 log.go:172] (0x966a930) (0x910bab0) Create stream
I0824 05:10:49.165220       7 log.go:172] (0x966a930) (0x910bab0) Stream added, broadcasting: 3
I0824 05:10:49.166923       7 log.go:172] (0x966a930) Reply frame received for 3
I0824 05:10:49.167095       7 log.go:172] (0x966a930) (0x9da3260) Create stream
I0824 05:10:49.167203       7 log.go:172] (0x966a930) (0x9da3260) Stream added, broadcasting: 5
I0824 05:10:49.168474       7 log.go:172] (0x966a930) Reply frame received for 5
I0824 05:10:50.254532       7 log.go:172] (0x966a930) Data frame received for 3
I0824 05:10:50.254834       7 log.go:172] (0x910bab0) (3) Data frame handling
I0824 05:10:50.255015       7 log.go:172] (0x966a930) Data frame received for 5
I0824 05:10:50.255219       7 log.go:172] (0x9da3260) (5) Data frame handling
I0824 05:10:50.255510       7 log.go:172] (0x910bab0) (3) Data frame sent
I0824 05:10:50.255675       7 log.go:172] (0x966a930) Data frame received for 3
I0824 05:10:50.255789       7 log.go:172] (0x910bab0) (3) Data frame handling
I0824 05:10:50.256096       7 log.go:172] (0x966a930) Data frame received for 1
I0824 05:10:50.256224       7 log.go:172] (0x966abd0) (1) Data frame handling
I0824 05:10:50.256397       7 log.go:172] (0x966abd0) (1) Data frame sent
I0824 05:10:50.256538       7 log.go:172] (0x966a930) (0x966abd0) Stream removed, broadcasting: 1
I0824 05:10:50.256832       7 log.go:172] (0x966a930) Go away received
I0824 05:10:50.257314       7 log.go:172] (0x966a930) (0x966abd0) Stream removed, broadcasting: 1
I0824 05:10:50.257483       7 log.go:172] (0x966a930) (0x910bab0) Stream removed, broadcasting: 3
I0824 05:10:50.257612       7 log.go:172] (0x966a930) (0x9da3260) Stream removed, broadcasting: 5
Aug 24 05:10:50.257: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:10:50.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9422" for this suite.
Aug 24 05:11:14.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:11:14.493: INFO: namespace pod-network-test-9422 deletion completed in 24.223699501s

• [SLOW TEST:54.893 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:11:14.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Aug 24 05:11:14.555: INFO: Waiting up to 5m0s for pod "pod-c4a3a97c-52b6-4f87-86df-2ce5769cc26d" in namespace "emptydir-2732" to be "success or failure"
Aug 24 05:11:14.570: INFO: Pod "pod-c4a3a97c-52b6-4f87-86df-2ce5769cc26d": Phase="Pending", Reason="", readiness=false. Elapsed: 15.511528ms
Aug 24 05:11:16.586: INFO: Pod "pod-c4a3a97c-52b6-4f87-86df-2ce5769cc26d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030662718s
Aug 24 05:11:18.594: INFO: Pod "pod-c4a3a97c-52b6-4f87-86df-2ce5769cc26d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039091714s
STEP: Saw pod success
Aug 24 05:11:18.594: INFO: Pod "pod-c4a3a97c-52b6-4f87-86df-2ce5769cc26d" satisfied condition "success or failure"
Aug 24 05:11:18.637: INFO: Trying to get logs from node iruya-worker pod pod-c4a3a97c-52b6-4f87-86df-2ce5769cc26d container test-container: 
STEP: delete the pod
Aug 24 05:11:18.776: INFO: Waiting for pod pod-c4a3a97c-52b6-4f87-86df-2ce5769cc26d to disappear
Aug 24 05:11:18.818: INFO: Pod pod-c4a3a97c-52b6-4f87-86df-2ce5769cc26d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:11:18.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2732" for this suite.
Aug 24 05:11:24.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:11:25.014: INFO: namespace emptydir-2732 deletion completed in 6.18629899s

• [SLOW TEST:10.518 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:11:25.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 24 05:11:25.170: INFO: Waiting up to 5m0s for pod "pod-ff0ed4df-4bde-438c-9eb8-cbb006549dfe" in namespace "emptydir-2267" to be "success or failure"
Aug 24 05:11:25.182: INFO: Pod "pod-ff0ed4df-4bde-438c-9eb8-cbb006549dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 11.207605ms
Aug 24 05:11:27.189: INFO: Pod "pod-ff0ed4df-4bde-438c-9eb8-cbb006549dfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018354179s
Aug 24 05:11:29.196: INFO: Pod "pod-ff0ed4df-4bde-438c-9eb8-cbb006549dfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02503217s
STEP: Saw pod success
Aug 24 05:11:29.196: INFO: Pod "pod-ff0ed4df-4bde-438c-9eb8-cbb006549dfe" satisfied condition "success or failure"
Aug 24 05:11:29.201: INFO: Trying to get logs from node iruya-worker2 pod pod-ff0ed4df-4bde-438c-9eb8-cbb006549dfe container test-container: 
STEP: delete the pod
Aug 24 05:11:29.218: INFO: Waiting for pod pod-ff0ed4df-4bde-438c-9eb8-cbb006549dfe to disappear
Aug 24 05:11:29.271: INFO: Pod pod-ff0ed4df-4bde-438c-9eb8-cbb006549dfe no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:11:29.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2267" for this suite.
Aug 24 05:11:35.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:11:35.446: INFO: namespace emptydir-2267 deletion completed in 6.162987312s

• [SLOW TEST:10.429 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:11:35.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:11:35.633: INFO: Create a RollingUpdate DaemonSet
Aug 24 05:11:35.639: INFO: Check that daemon pods launch on every node of the cluster
Aug 24 05:11:35.652: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:11:35.667: INFO: Number of nodes with available pods: 0
Aug 24 05:11:35.667: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:11:36.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:11:36.695: INFO: Number of nodes with available pods: 0
Aug 24 05:11:36.696: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:11:37.897: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:11:37.903: INFO: Number of nodes with available pods: 0
Aug 24 05:11:37.903: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:11:38.677: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:11:38.689: INFO: Number of nodes with available pods: 0
Aug 24 05:11:38.689: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:11:39.690: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:11:39.697: INFO: Number of nodes with available pods: 1
Aug 24 05:11:39.697: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 05:11:40.694: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:11:40.703: INFO: Number of nodes with available pods: 2
Aug 24 05:11:40.703: INFO: Number of running nodes: 2, number of available pods: 2
Aug 24 05:11:40.704: INFO: Update the DaemonSet to trigger a rollout
Aug 24 05:11:40.711: INFO: Updating DaemonSet daemon-set
Aug 24 05:11:46.746: INFO: Roll back the DaemonSet before rollout is complete
Aug 24 05:11:46.755: INFO: Updating DaemonSet daemon-set
Aug 24 05:11:46.756: INFO: Make sure DaemonSet rollback is complete
Aug 24 05:11:46.793: INFO: Wrong image for pod: daemon-set-wrz9b. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 24 05:11:46.793: INFO: Pod daemon-set-wrz9b is not available
Aug 24 05:11:46.822: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:11:47.831: INFO: Wrong image for pod: daemon-set-wrz9b. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Aug 24 05:11:47.832: INFO: Pod daemon-set-wrz9b is not available
Aug 24 05:11:47.839: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:11:48.831: INFO: Pod daemon-set-px9bv is not available
Aug 24 05:11:48.842: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7394, will wait for the garbage collector to delete the pods
Aug 24 05:11:48.915: INFO: Deleting DaemonSet.extensions daemon-set took: 8.317677ms
Aug 24 05:11:49.216: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.847864ms
Aug 24 05:12:03.422: INFO: Number of nodes with available pods: 0
Aug 24 05:12:03.422: INFO: Number of running nodes: 0, number of available pods: 0
Aug 24 05:12:03.426: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7394/daemonsets","resourceVersion":"2296123"},"items":null}

Aug 24 05:12:03.430: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7394/pods","resourceVersion":"2296123"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:12:03.454: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7394" for this suite.
Aug 24 05:12:09.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:12:09.635: INFO: namespace daemonsets-7394 deletion completed in 6.172125874s

• [SLOW TEST:34.187 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
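The DaemonSet checks above poll once per second until every schedulable node reports an available pod. A minimal sketch of that wait loop; the `get_available` callback, interval, and timeout here are illustrative stand-ins, not the e2e framework's actual helpers:

```python
import time

def wait_for_daemonset_ready(get_available, desired, timeout=60.0,
                             interval=1.0, clock=time.monotonic,
                             sleep=time.sleep):
    """Poll until `desired` daemon pods are available, like the e2e loop
    that logs "Number of nodes with available pods" every tick and gives
    up after `timeout` seconds."""
    deadline = clock() + timeout
    while True:
        available = get_available()
        if available >= desired:
            return available
        if clock() >= deadline:
            raise TimeoutError(
                f"only {available}/{desired} daemon pods became available")
        sleep(interval)
```

In the run above the count goes 0, 0, 1, 2 over four ticks before the loop reports "Number of running nodes: 2, number of available pods: 2".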
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:12:09.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-ca8d314f-afee-4446-8422-4ef305cf1c5e
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:12:09.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1693" for this suite.
Aug 24 05:12:15.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:12:15.908: INFO: namespace secrets-1693 deletion completed in 6.159760716s

• [SLOW TEST:6.270 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
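The secret above is rejected server-side because its data map contains an empty key. A rough model of that validation rule; the regex approximates Kubernetes' allowed key characters and is not the exact upstream implementation:

```python
import re

# Approximation of the allowed secret data key characters
# (alphanumerics, '-', '_' and '.').
_KEY_RE = re.compile(r"^[A-Za-z0-9._-]+$")

def validate_secret_keys(data):
    """Return a list of error strings for invalid secret data keys;
    an empty list means the secret would be accepted."""
    errors = []
    for key in data:
        if key == "":
            errors.append("data key must not be empty")
        elif not _KEY_RE.match(key):
            errors.append(f"invalid data key {key!r}")
    return errors
```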
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:12:15.909: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:12:16.129: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5eb8db26-d08c-40b9-b7fb-f1c6c9d23b80", Controller:(*bool)(0x8d8c8ea), BlockOwnerDeletion:(*bool)(0x8d8c8eb)}}
Aug 24 05:12:16.150: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"655c8061-4718-4ec0-a051-569e7963d69e", Controller:(*bool)(0x96bf732), BlockOwnerDeletion:(*bool)(0x96bf733)}}
Aug 24 05:12:16.204: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"811e1357-98b9-4463-916e-62a209bf4072", Controller:(*bool)(0x96bf9ca), BlockOwnerDeletion:(*bool)(0x96bf9cb)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:12:21.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1341" for this suite.
Aug 24 05:12:27.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:12:27.435: INFO: namespace gc-1341 deletion completed in 6.156603222s

• [SLOW TEST:11.526 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
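The test above builds a pod1 → pod3 → pod2 → pod1 ownerReference circle and verifies the garbage collector still deletes everything. The underlying idea is that liveness flows from ownerless roots to their dependents, so a pure cycle with no live root is entirely collectible. A simplified model of that reachability rule, not the real GC's graph code:

```python
def collectible(owners):
    """owners maps object name -> set of owner names (its ownerReferences).

    An object is live if it has no owners (a root) or is transitively owned
    by a live root. Everything else -- including a pure cycle such as
    pod1 -> pod3 -> pod2 -> pod1 -- is garbage and must not block the GC.
    """
    live = {name for name, refs in owners.items() if not refs}
    changed = True
    while changed:  # propagate liveness from owners to dependents
        changed = False
        for name, refs in owners.items():
            if name not in live and refs & live:
                live.add(name)
                changed = True
    return set(owners) - live
```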
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:12:27.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Aug 24 05:12:34.591: INFO: 0 pods remaining
Aug 24 05:12:34.591: INFO: 0 pods have nil DeletionTimestamp
Aug 24 05:12:34.591: INFO: 
Aug 24 05:12:35.025: INFO: 0 pods remaining
Aug 24 05:12:35.025: INFO: 0 pods have nil DeletionTimestamp
Aug 24 05:12:35.026: INFO: 
STEP: Gathering metrics
W0824 05:12:35.968095       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 24 05:12:35.968: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:12:35.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9161" for this suite.
Aug 24 05:12:42.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:12:42.397: INFO: namespace gc-9161 deletion completed in 6.223697684s

• [SLOW TEST:14.960 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
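"Keep the rc around until all its pods are deleted" is foreground cascading deletion: the owner gets a deletionTimestamp and a `foregroundDeletion` finalizer immediately, but is only finalized once no dependent with `blockOwnerDeletion: true` remains. A toy model of that gate; the dict layout is illustrative, only the field names follow ownerReferences:

```python
def owner_removable(dependents):
    """True once the owner (the rc) may actually disappear: no remaining
    dependent both blocks owner deletion and still exists. Simplified model
    of the foregroundDeletion finalizer check."""
    return not any(d["blockOwnerDeletion"] and not d["deleted"]
                   for d in dependents)
```

This is why the log counts down "pods remaining" before the rc itself is gone.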
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:12:42.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 24 05:12:42.467: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 24 05:12:42.502: INFO: Waiting for terminating namespaces to be deleted...
Aug 24 05:12:42.506: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 24 05:12:42.518: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 05:12:42.518: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 24 05:12:42.518: INFO: daemon-set-qwbvn from daemonsets-4407 started at 2020-08-24 03:43:04 +0000 UTC (1 container statuses recorded)
Aug 24 05:12:42.518: INFO: 	Container app ready: true, restart count 0
Aug 24 05:12:42.518: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 05:12:42.518: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 24 05:12:42.519: INFO: daemon-set-2gkvj from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 24 05:12:42.519: INFO: 	Container app ready: true, restart count 0
Aug 24 05:12:42.519: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 24 05:12:42.533: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 05:12:42.533: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 24 05:12:42.533: INFO: daemon-set-hlzh5 from daemonsets-205 started at 2020-08-22 15:09:24 +0000 UTC (1 container statuses recorded)
Aug 24 05:12:42.533: INFO: 	Container app ready: true, restart count 0
Aug 24 05:12:42.533: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 24 05:12:42.533: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 24 05:12:42.533: INFO: daemon-set-nk8hf from daemonsets-4407 started at 2020-08-24 03:43:05 +0000 UTC (1 container statuses recorded)
Aug 24 05:12:42.533: INFO: 	Container app ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162e1c3c2e5ddc51], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:12:43.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4633" for this suite.
Aug 24 05:12:49.609: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:12:49.750: INFO: namespace sched-pred-4633 deletion completed in 6.164648969s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.347 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
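The `FailedScheduling` event above ("3 node(s) didn't match node selector") comes from the nodeSelector predicate: a node is feasible only if its labels contain every key/value pair in the pod's nodeSelector. A simplified sketch of just that predicate; the real scheduler also weighs taints, resources, and affinity:

```python
def matches_node_selector(node_labels, node_selector):
    """A pod's nodeSelector matches a node iff every requested key/value
    pair is present in the node's labels (subset match)."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def feasible_nodes(nodes, node_selector):
    """Return the names of nodes the pod could land on; an empty list
    yields the '0/N nodes are available' scheduling failure."""
    return [name for name, labels in nodes.items()
            if matches_node_selector(labels, node_selector)]
```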
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:12:49.752: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 24 05:12:49.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-943'
Aug 24 05:12:51.038: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 24 05:12:51.038: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Aug 24 05:12:53.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-943'
Aug 24 05:12:54.323: INFO: stderr: ""
Aug 24 05:12:54.323: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:12:54.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-943" for this suite.
Aug 24 05:14:17.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:14:17.995: INFO: namespace kubectl-943 deletion completed in 1m23.663601595s

• [SLOW TEST:88.243 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
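The deprecated `--generator=deployment/apps.v1` expanded the command above into an apps/v1 Deployment. A rough reconstruction of that manifest as a dict; the `run: <name>` label follows the old generator's convention, and the fields are simplified:

```python
def deployment_from_run(name, image, replicas=1):
    """Approximate the Deployment that `kubectl run
    --generator=deployment/apps.v1` used to create (simplified sketch,
    not kubectl's exact output)."""
    labels = {"run": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }
```

Since the generator's removal, `kubectl create deployment` is the supported way to get the same object.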
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:14:17.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:14:20.142: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:14:26.490: INFO: Creating ReplicaSet my-hostname-basic-f4d4c29a-9859-4642-87d7-4dabd76cc466
Aug 24 05:14:26.512: INFO: Pod name my-hostname-basic-f4d4c29a-9859-4642-87d7-4dabd76cc466: Found 0 pods out of 1
Aug 24 05:14:31.521: INFO: Pod name my-hostname-basic-f4d4c29a-9859-4642-87d7-4dabd76cc466: Found 1 pod out of 1
Aug 24 05:14:31.521: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-f4d4c29a-9859-4642-87d7-4dabd76cc466" is running
Aug 24 05:14:31.528: INFO: Pod "my-hostname-basic-f4d4c29a-9859-4642-87d7-4dabd76cc466-lzmbq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-24 05:14:26 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-24 05:14:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-24 05:14:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-24 05:14:26 +0000 UTC Reason: Message:}])
Aug 24 05:14:31.529: INFO: Trying to dial the pod
Aug 24 05:14:36.547: INFO: Controller my-hostname-basic-f4d4c29a-9859-4642-87d7-4dabd76cc466: Got expected result from replica 1 [my-hostname-basic-f4d4c29a-9859-4642-87d7-4dabd76cc466-lzmbq]: "my-hostname-basic-f4d4c29a-9859-4642-87d7-4dabd76cc466-lzmbq", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:14:36.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9682" for this suite.
Aug 24 05:14:42.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:14:42.756: INFO: namespace replicaset-9682 deletion completed in 6.199711807s

• [SLOW TEST:16.320 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
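The "Got expected result from replica" check works because the hostname image answers each request with its own pod name. A minimal sketch of that success criterion; the pod names used below are placeholders:

```python
def replicas_ok(expected_pods, responses):
    """The test dials every replica and requires one success per replica:
    each expected pod must have answered with exactly its own pod name."""
    return all(responses.get(pod) == pod for pod in expected_pods)
```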
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:14:42.757: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-6f53ec96-6d9b-44fe-ad34-4ae281af2587
STEP: Creating a pod to test consume secrets
Aug 24 05:14:42.870: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e90da25-763d-40df-ab8c-f85a5858f763" in namespace "projected-526" to be "success or failure"
Aug 24 05:14:42.875: INFO: Pod "pod-projected-secrets-7e90da25-763d-40df-ab8c-f85a5858f763": Phase="Pending", Reason="", readiness=false. Elapsed: 4.139727ms
Aug 24 05:14:45.054: INFO: Pod "pod-projected-secrets-7e90da25-763d-40df-ab8c-f85a5858f763": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183171687s
Aug 24 05:14:47.061: INFO: Pod "pod-projected-secrets-7e90da25-763d-40df-ab8c-f85a5858f763": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.19073589s
STEP: Saw pod success
Aug 24 05:14:47.062: INFO: Pod "pod-projected-secrets-7e90da25-763d-40df-ab8c-f85a5858f763" satisfied condition "success or failure"
Aug 24 05:14:47.112: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-7e90da25-763d-40df-ab8c-f85a5858f763 container projected-secret-volume-test: 
STEP: delete the pod
Aug 24 05:14:47.139: INFO: Waiting for pod pod-projected-secrets-7e90da25-763d-40df-ab8c-f85a5858f763 to disappear
Aug 24 05:14:47.144: INFO: Pod pod-projected-secrets-7e90da25-763d-40df-ab8c-f85a5858f763 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:14:47.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-526" for this suite.
Aug 24 05:14:53.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:14:53.314: INFO: namespace projected-526 deletion completed in 6.160364669s

• [SLOW TEST:10.557 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
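A projected secret volume surfaces each data key as a file under the mount path, which is what the test pod's container reads back before reporting success. A simplified sketch of that projection (no atomic symlink swap, ownership, or mode handling):

```python
import pathlib
import tempfile

def project_secret(data, mount_dir):
    """Write each secret key as a file under mount_dir, the way a projected
    secret volume surfaces keys to the container. Returns the sorted file
    names for inspection."""
    mount = pathlib.Path(mount_dir)
    mount.mkdir(parents=True, exist_ok=True)
    for key, value in data.items():
        (mount / key).write_bytes(value)
    return sorted(p.name for p in mount.iterdir())

# Example: project one key into a temporary mount directory.
with tempfile.TemporaryDirectory() as mount:
    assert project_secret({"data-1": b"value-1"}, mount) == ["data-1"]
```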
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:14:53.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 24 05:14:53.521: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:14:53.564: INFO: Number of nodes with available pods: 0
Aug 24 05:14:53.565: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:14:54.576: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:14:54.582: INFO: Number of nodes with available pods: 0
Aug 24 05:14:54.582: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:14:55.575: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:14:55.581: INFO: Number of nodes with available pods: 0
Aug 24 05:14:55.581: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:14:56.587: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:14:56.595: INFO: Number of nodes with available pods: 0
Aug 24 05:14:56.595: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:14:57.577: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:14:57.582: INFO: Number of nodes with available pods: 1
Aug 24 05:14:57.582: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 05:14:58.578: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:14:58.584: INFO: Number of nodes with available pods: 1
Aug 24 05:14:58.584: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 05:14:59.578: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:14:59.586: INFO: Number of nodes with available pods: 2
Aug 24 05:14:59.586: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Aug 24 05:14:59.680: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:14:59.693: INFO: Number of nodes with available pods: 1
Aug 24 05:14:59.693: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 05:15:00.704: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:15:00.710: INFO: Number of nodes with available pods: 1
Aug 24 05:15:00.710: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 05:15:01.707: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:15:01.712: INFO: Number of nodes with available pods: 1
Aug 24 05:15:01.712: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 05:15:02.708: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:15:02.716: INFO: Number of nodes with available pods: 1
Aug 24 05:15:02.716: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 24 05:15:03.708: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:15:03.714: INFO: Number of nodes with available pods: 2
Aug 24 05:15:03.714: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4591, will wait for the garbage collector to delete the pods
Aug 24 05:15:03.784: INFO: Deleting DaemonSet.extensions daemon-set took: 7.760559ms
Aug 24 05:15:04.085: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.131987ms
Aug 24 05:15:13.391: INFO: Number of nodes with available pods: 0
Aug 24 05:15:13.391: INFO: Number of running nodes: 0, number of available pods: 0
Aug 24 05:15:13.395: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4591/daemonsets","resourceVersion":"2296902"},"items":null}

Aug 24 05:15:13.399: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4591/pods","resourceVersion":"2296902"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:15:13.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4591" for this suite.
Aug 24 05:15:19.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:15:19.594: INFO: namespace daemonsets-4591 deletion completed in 6.163071453s

• [SLOW TEST:26.280 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:15:19.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:15:23.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5248" for this suite.
Aug 24 05:16:03.828: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:16:04.004: INFO: namespace kubelet-test-5248 deletion completed in 40.197358794s

• [SLOW TEST:44.409 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:16:04.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 24 05:16:04.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-9728'
Aug 24 05:16:09.551: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 24 05:16:09.551: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Aug 24 05:16:09.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-9728'
Aug 24 05:16:10.744: INFO: stderr: ""
Aug 24 05:16:10.745: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:16:10.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9728" for this suite.
Aug 24 05:16:16.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:16:17.000: INFO: namespace kubectl-9728 deletion completed in 6.244553928s

• [SLOW TEST:12.995 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:16:17.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Aug 24 05:16:17.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4636'
Aug 24 05:16:18.606: INFO: stderr: ""
Aug 24 05:16:18.606: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 24 05:16:18.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4636'
Aug 24 05:16:19.726: INFO: stderr: ""
Aug 24 05:16:19.726: INFO: stdout: "update-demo-nautilus-8gffk update-demo-nautilus-l2wgm "
Aug 24 05:16:19.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8gffk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:16:20.846: INFO: stderr: ""
Aug 24 05:16:20.846: INFO: stdout: ""
Aug 24 05:16:20.846: INFO: update-demo-nautilus-8gffk is created but not running
Aug 24 05:16:25.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4636'
Aug 24 05:16:27.024: INFO: stderr: ""
Aug 24 05:16:27.024: INFO: stdout: "update-demo-nautilus-8gffk update-demo-nautilus-l2wgm "
Aug 24 05:16:27.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8gffk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:16:28.157: INFO: stderr: ""
Aug 24 05:16:28.158: INFO: stdout: "true"
Aug 24 05:16:28.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8gffk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:16:29.314: INFO: stderr: ""
Aug 24 05:16:29.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:16:29.314: INFO: validating pod update-demo-nautilus-8gffk
Aug 24 05:16:29.321: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:16:29.321: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:16:29.322: INFO: update-demo-nautilus-8gffk is verified up and running
Aug 24 05:16:29.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l2wgm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:16:30.500: INFO: stderr: ""
Aug 24 05:16:30.500: INFO: stdout: "true"
Aug 24 05:16:30.500: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l2wgm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:16:31.665: INFO: stderr: ""
Aug 24 05:16:31.665: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:16:31.665: INFO: validating pod update-demo-nautilus-l2wgm
Aug 24 05:16:31.670: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:16:31.671: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:16:31.671: INFO: update-demo-nautilus-l2wgm is verified up and running
STEP: rolling-update to new replication controller
Aug 24 05:16:31.677: INFO: scanned /root for discovery docs: 
Aug 24 05:16:31.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4636'
Aug 24 05:16:56.612: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 24 05:16:56.612: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 24 05:16:56.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4636'
Aug 24 05:16:57.803: INFO: stderr: ""
Aug 24 05:16:57.803: INFO: stdout: "update-demo-kitten-44t9k update-demo-kitten-qnfdf "
Aug 24 05:16:57.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-44t9k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:16:58.950: INFO: stderr: ""
Aug 24 05:16:58.950: INFO: stdout: "true"
Aug 24 05:16:58.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-44t9k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:17:00.075: INFO: stderr: ""
Aug 24 05:17:00.075: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 24 05:17:00.076: INFO: validating pod update-demo-kitten-44t9k
Aug 24 05:17:00.082: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 24 05:17:00.082: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 24 05:17:00.082: INFO: update-demo-kitten-44t9k is verified up and running
Aug 24 05:17:00.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qnfdf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:17:01.195: INFO: stderr: ""
Aug 24 05:17:01.195: INFO: stdout: "true"
Aug 24 05:17:01.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-qnfdf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4636'
Aug 24 05:17:02.334: INFO: stderr: ""
Aug 24 05:17:02.335: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Aug 24 05:17:02.335: INFO: validating pod update-demo-kitten-qnfdf
Aug 24 05:17:02.341: INFO: got data: {
  "image": "kitten.jpg"
}

Aug 24 05:17:02.341: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Aug 24 05:17:02.341: INFO: update-demo-kitten-qnfdf is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:17:02.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4636" for this suite.
Aug 24 05:17:24.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:17:24.531: INFO: namespace kubectl-4636 deletion completed in 22.181105159s

• [SLOW TEST:67.530 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:17:24.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-651d0672-7016-408b-9798-460d02c1dc31
STEP: Creating a pod to test consume secrets
Aug 24 05:17:24.682: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21" in namespace "projected-8408" to be "success or failure"
Aug 24 05:17:24.702: INFO: Pod "pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21": Phase="Pending", Reason="", readiness=false. Elapsed: 19.65538ms
Aug 24 05:17:26.800: INFO: Pod "pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118017346s
Aug 24 05:17:28.808: INFO: Pod "pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21": Phase="Running", Reason="", readiness=true. Elapsed: 4.125427342s
Aug 24 05:17:30.815: INFO: Pod "pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.133094831s
STEP: Saw pod success
Aug 24 05:17:30.816: INFO: Pod "pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21" satisfied condition "success or failure"
Aug 24 05:17:30.821: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21 container secret-volume-test: 
STEP: delete the pod
Aug 24 05:17:32.049: INFO: Waiting for pod pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21 to disappear
Aug 24 05:17:32.187: INFO: Pod pod-projected-secrets-5d92df27-ff11-45f9-a443-0eeed99c8b21 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:17:32.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8408" for this suite.
Aug 24 05:17:38.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:17:38.396: INFO: namespace projected-8408 deletion completed in 6.198168454s

• [SLOW TEST:13.864 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:17:38.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-e4e3b129-e2a0-4bb9-bc83-1dc2c05d3a4a in namespace container-probe-6977
Aug 24 05:17:42.559: INFO: Started pod test-webserver-e4e3b129-e2a0-4bb9-bc83-1dc2c05d3a4a in namespace container-probe-6977
STEP: checking the pod's current state and verifying that restartCount is present
Aug 24 05:17:42.564: INFO: Initial restart count of pod test-webserver-e4e3b129-e2a0-4bb9-bc83-1dc2c05d3a4a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:21:43.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6977" for this suite.
Aug 24 05:21:50.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:21:50.159: INFO: namespace container-probe-6977 deletion completed in 6.218049877s

• [SLOW TEST:251.762 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:21:50.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-f2a48224-aaa2-456e-a92f-7e1807a55327
STEP: Creating secret with name s-test-opt-upd-1d436e7c-69f4-4580-abd1-03a1520416f0
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-f2a48224-aaa2-456e-a92f-7e1807a55327
STEP: Updating secret s-test-opt-upd-1d436e7c-69f4-4580-abd1-03a1520416f0
STEP: Creating secret with name s-test-opt-create-e0f32613-0a9a-4fd1-8865-0bbe374fe04f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:21:58.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7975" for this suite.
Aug 24 05:22:22.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:22:22.624: INFO: namespace secrets-7975 deletion completed in 24.156745865s

• [SLOW TEST:32.461 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:22:22.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 24 05:22:22.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5181'
Aug 24 05:22:24.029: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 24 05:22:24.029: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Aug 24 05:22:24.071: INFO: scanned /root for discovery docs: 
Aug 24 05:22:24.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5181'
Aug 24 05:22:41.423: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Aug 24 05:22:41.423: INFO: stdout: "Created e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8\nScaling up e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Aug 24 05:22:41.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5181'
Aug 24 05:22:42.635: INFO: stderr: ""
Aug 24 05:22:42.635: INFO: stdout: "e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8-9m6r2 e2e-test-nginx-rc-xcc4m "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Aug 24 05:22:47.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5181'
Aug 24 05:22:48.789: INFO: stderr: ""
Aug 24 05:22:48.789: INFO: stdout: "e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8-9m6r2 "
Aug 24 05:22:48.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8-9m6r2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5181'
Aug 24 05:22:49.880: INFO: stderr: ""
Aug 24 05:22:49.880: INFO: stdout: "true"
Aug 24 05:22:49.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8-9m6r2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5181'
Aug 24 05:22:51.073: INFO: stderr: ""
Aug 24 05:22:51.073: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Aug 24 05:22:51.073: INFO: e2e-test-nginx-rc-064c4920e659c153578c948c09e3d1e8-9m6r2 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Aug 24 05:22:51.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5181'
Aug 24 05:22:52.200: INFO: stderr: ""
Aug 24 05:22:52.201: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:22:52.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5181" for this suite.
Aug 24 05:23:14.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:23:14.388: INFO: namespace kubectl-5181 deletion completed in 22.179188956s

• [SLOW TEST:51.763 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
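The verification step above uses a kubectl go-template over `status.containerStatuses` to decide whether the `e2e-test-nginx-rc` container is running. A minimal Python sketch of the same predicate, applied to a hypothetical pod dict shaped like the API object (the sample data below is illustrative, not real API output):

```python
def container_running(pod: dict, name: str) -> bool:
    """Mirror of the e2e go-template check: true iff a containerStatus
    with the given name exists and has a 'running' entry in its state."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == name and "running" in status.get("state", {}):
            return True
    return False

# Hypothetical sample shaped like the pod verified in the log above.
sample = {
    "status": {
        "containerStatuses": [
            {
                "name": "e2e-test-nginx-rc",
                "state": {"running": {"startedAt": "2020-08-24T05:22:40Z"}},
            }
        ]
    }
}
```

The go-template in the log prints the literal string `true` when this condition holds; the sketch returns a boolean instead, but walks the same fields in the same order.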
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:23:14.393: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-707.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-707.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-707.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-707.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-707.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-707.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-707.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-707.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-707.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-707.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-707.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.96.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.96.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.96.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.96.170_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-707.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-707.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-707.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-707.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-707.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-707.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-707.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-707.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-707.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-707.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-707.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 170.96.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.96.170_udp@PTR;check="$$(dig +tcp +noall +answer +search 170.96.107.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.107.96.170_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 24 05:23:22.655: INFO: Unable to read wheezy_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:22.659: INFO: Unable to read wheezy_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:22.663: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:22.666: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:22.695: INFO: Unable to read jessie_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:22.699: INFO: Unable to read jessie_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:22.703: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:22.707: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:22.736: INFO: Lookups using dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e failed for: [wheezy_udp@dns-test-service.dns-707.svc.cluster.local wheezy_tcp@dns-test-service.dns-707.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_udp@dns-test-service.dns-707.svc.cluster.local jessie_tcp@dns-test-service.dns-707.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local]

Aug 24 05:23:27.768: INFO: Unable to read wheezy_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:27.774: INFO: Unable to read wheezy_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:27.779: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:27.783: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:27.813: INFO: Unable to read jessie_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:27.816: INFO: Unable to read jessie_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:27.820: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:27.824: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:27.849: INFO: Lookups using dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e failed for: [wheezy_udp@dns-test-service.dns-707.svc.cluster.local wheezy_tcp@dns-test-service.dns-707.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_udp@dns-test-service.dns-707.svc.cluster.local jessie_tcp@dns-test-service.dns-707.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local]

Aug 24 05:23:32.743: INFO: Unable to read wheezy_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:32.769: INFO: Unable to read wheezy_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:32.797: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:32.801: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:32.833: INFO: Unable to read jessie_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:32.837: INFO: Unable to read jessie_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:32.841: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:32.845: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:32.870: INFO: Lookups using dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e failed for: [wheezy_udp@dns-test-service.dns-707.svc.cluster.local wheezy_tcp@dns-test-service.dns-707.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_udp@dns-test-service.dns-707.svc.cluster.local jessie_tcp@dns-test-service.dns-707.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local]

Aug 24 05:23:37.743: INFO: Unable to read wheezy_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:37.748: INFO: Unable to read wheezy_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:37.753: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:37.757: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:37.783: INFO: Unable to read jessie_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:37.787: INFO: Unable to read jessie_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:37.791: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:37.795: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:37.824: INFO: Lookups using dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e failed for: [wheezy_udp@dns-test-service.dns-707.svc.cluster.local wheezy_tcp@dns-test-service.dns-707.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_udp@dns-test-service.dns-707.svc.cluster.local jessie_tcp@dns-test-service.dns-707.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local]

Aug 24 05:23:42.745: INFO: Unable to read wheezy_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:42.750: INFO: Unable to read wheezy_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:42.755: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:42.759: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:42.801: INFO: Unable to read jessie_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:42.806: INFO: Unable to read jessie_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:42.810: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:42.814: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:42.838: INFO: Lookups using dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e failed for: [wheezy_udp@dns-test-service.dns-707.svc.cluster.local wheezy_tcp@dns-test-service.dns-707.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_udp@dns-test-service.dns-707.svc.cluster.local jessie_tcp@dns-test-service.dns-707.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local]

Aug 24 05:23:47.743: INFO: Unable to read wheezy_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:47.749: INFO: Unable to read wheezy_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:47.753: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:47.757: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:47.788: INFO: Unable to read jessie_udp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:47.793: INFO: Unable to read jessie_tcp@dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:47.797: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:47.802: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local from pod dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e: the server could not find the requested resource (get pods dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e)
Aug 24 05:23:47.850: INFO: Lookups using dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e failed for: [wheezy_udp@dns-test-service.dns-707.svc.cluster.local wheezy_tcp@dns-test-service.dns-707.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_udp@dns-test-service.dns-707.svc.cluster.local jessie_tcp@dns-test-service.dns-707.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-707.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-707.svc.cluster.local]

Aug 24 05:23:52.842: INFO: DNS probes using dns-707/dns-test-c720cc27-7ff2-401e-aed0-6cecedbf9b4e succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:23:52.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-707" for this suite.
Aug 24 05:23:59.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:23:59.503: INFO: namespace dns-707 deletion completed in 6.497386069s

• [SLOW TEST:45.111 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
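The wheezy and jessie probe scripts above run one UDP (`+notcp`) and one TCP (`+tcp`) `dig` per DNS name each second, writing an `OK` file under `/results` per lookup; the "Lookups ... failed for:" lines then list the result keys still missing. A small Python sketch of how those result-key names are composed (the service names are taken from the log; the helper itself is illustrative, not part of the framework):

```python
def probe_keys(prober: str, names: list[str]) -> list[str]:
    """Build the /results filenames the dig loop writes: one UDP and one
    TCP probe per DNS name, e.g. 'wheezy_udp@<name>' and 'wheezy_tcp@<name>'."""
    keys = []
    for name in names:
        keys.append(f"{prober}_udp@{name}")
        keys.append(f"{prober}_tcp@{name}")
    return keys

# Service names probed in namespace dns-707, per the commands in the log.
service_names = [
    "dns-test-service.dns-707.svc.cluster.local",
    "_http._tcp.dns-test-service.dns-707.svc.cluster.local",
    "_http._tcp.test-service-2.dns-707.svc.cluster.local",
]
```

The failure lists in the log are exactly the union of these keys for both probers, which is why each retry reports the same eight entries until the service records resolve.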
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:23:59.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:23:59.628: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Aug 24 05:23:59.672: INFO: Pod name sample-pod: Found 0 pods out of 1
Aug 24 05:24:04.681: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 24 05:24:04.682: INFO: Creating deployment "test-rolling-update-deployment"
Aug 24 05:24:04.690: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Aug 24 05:24:04.704: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Aug 24 05:24:06.891: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Aug 24 05:24:06.897: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733843444, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733843444, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733843444, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733843444, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:24:08.903: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 24 05:24:08.929: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-4105,SelfLink:/apis/apps/v1/namespaces/deployment-4105/deployments/test-rolling-update-deployment,UID:96382fdd-2a3c-4d41-bfd9-a175bec9ea40,ResourceVersion:2298426,Generation:1,CreationTimestamp:2020-08-24 05:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-24 05:24:04 +0000 UTC 2020-08-24 05:24:04 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-24 05:24:08 +0000 UTC 2020-08-24 05:24:04 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 24 05:24:08.957: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-4105,SelfLink:/apis/apps/v1/namespaces/deployment-4105/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:862ee63d-f3a2-4178-98cf-228e3fdd56a1,ResourceVersion:2298415,Generation:1,CreationTimestamp:2020-08-24 05:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 96382fdd-2a3c-4d41-bfd9-a175bec9ea40 0x8d8cc97 0x8d8cc98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 24 05:24:08.957: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Aug 24 05:24:08.959: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-4105,SelfLink:/apis/apps/v1/namespaces/deployment-4105/replicasets/test-rolling-update-controller,UID:7479391d-dcaf-4fcb-a83f-c7035cd19b18,ResourceVersion:2298424,Generation:2,CreationTimestamp:2020-08-24 05:23:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 96382fdd-2a3c-4d41-bfd9-a175bec9ea40 0x8d8cbc7 0x8d8cbc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 24 05:24:08.968: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-k4wss" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-k4wss,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-4105,SelfLink:/api/v1/namespaces/deployment-4105/pods/test-rolling-update-deployment-79f6b9d75c-k4wss,UID:e52b0a97-683d-4456-bd5a-d0b80b9fc17c,ResourceVersion:2298414,Generation:0,CreationTimestamp:2020-08-24 05:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 862ee63d-f3a2-4178-98cf-228e3fdd56a1 0x8d8d5c7 0x8d8d5c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-7qwqh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-7qwqh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-7qwqh true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x8d8d640} {node.kubernetes.io/unreachable Exists  NoExecute 0x8d8d660}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:24:04 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:24:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:24:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:24:04 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.11,StartTime:2020-08-24 05:24:04 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-24 05:24:07 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://9c55adf0cf38731da9ddbdf150a365b40cf0e59e419f0dfba4e688ddb47addc1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:24:08.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4105" for this suite.
Aug 24 05:24:15.128: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:24:15.250: INFO: namespace deployment-4105 deletion completed in 6.272599908s

• [SLOW TEST:15.745 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:24:15.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7412
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 24 05:24:15.303: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 24 05:24:43.505: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.13:8080/dial?request=hostName&protocol=udp&host=10.244.1.49&port=8081&tries=1'] Namespace:pod-network-test-7412 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 05:24:43.505: INFO: >>> kubeConfig: /root/.kube/config
I0824 05:24:43.615379       7 log.go:172] (0x8e9e540) (0x8e9e620) Create stream
I0824 05:24:43.615611       7 log.go:172] (0x8e9e540) (0x8e9e620) Stream added, broadcasting: 1
I0824 05:24:43.620360       7 log.go:172] (0x8e9e540) Reply frame received for 1
I0824 05:24:43.620526       7 log.go:172] (0x8e9e540) (0x910b730) Create stream
I0824 05:24:43.620611       7 log.go:172] (0x8e9e540) (0x910b730) Stream added, broadcasting: 3
I0824 05:24:43.622409       7 log.go:172] (0x8e9e540) Reply frame received for 3
I0824 05:24:43.622701       7 log.go:172] (0x8e9e540) (0x873e850) Create stream
I0824 05:24:43.622881       7 log.go:172] (0x8e9e540) (0x873e850) Stream added, broadcasting: 5
I0824 05:24:43.625155       7 log.go:172] (0x8e9e540) Reply frame received for 5
I0824 05:24:43.735276       7 log.go:172] (0x8e9e540) Data frame received for 3
I0824 05:24:43.735507       7 log.go:172] (0x8e9e540) Data frame received for 5
I0824 05:24:43.735737       7 log.go:172] (0x873e850) (5) Data frame handling
I0824 05:24:43.735956       7 log.go:172] (0x910b730) (3) Data frame handling
I0824 05:24:43.736064       7 log.go:172] (0x910b730) (3) Data frame sent
I0824 05:24:43.736138       7 log.go:172] (0x8e9e540) Data frame received for 3
I0824 05:24:43.736222       7 log.go:172] (0x910b730) (3) Data frame handling
I0824 05:24:43.737262       7 log.go:172] (0x8e9e540) Data frame received for 1
I0824 05:24:43.737461       7 log.go:172] (0x8e9e620) (1) Data frame handling
I0824 05:24:43.737687       7 log.go:172] (0x8e9e620) (1) Data frame sent
I0824 05:24:43.737937       7 log.go:172] (0x8e9e540) (0x8e9e620) Stream removed, broadcasting: 1
I0824 05:24:43.738200       7 log.go:172] (0x8e9e540) Go away received
I0824 05:24:43.738858       7 log.go:172] (0x8e9e540) (0x8e9e620) Stream removed, broadcasting: 1
I0824 05:24:43.739084       7 log.go:172] (0x8e9e540) (0x910b730) Stream removed, broadcasting: 3
I0824 05:24:43.739231       7 log.go:172] (0x8e9e540) (0x873e850) Stream removed, broadcasting: 5
Aug 24 05:24:43.739: INFO: Waiting for endpoints: map[]
Aug 24 05:24:43.744: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.13:8080/dial?request=hostName&protocol=udp&host=10.244.2.12&port=8081&tries=1'] Namespace:pod-network-test-7412 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 05:24:43.744: INFO: >>> kubeConfig: /root/.kube/config
I0824 05:24:43.845313       7 log.go:172] (0x8c13a40) (0x8c13c00) Create stream
I0824 05:24:43.845514       7 log.go:172] (0x8c13a40) (0x8c13c00) Stream added, broadcasting: 1
I0824 05:24:43.850580       7 log.go:172] (0x8c13a40) Reply frame received for 1
I0824 05:24:43.850828       7 log.go:172] (0x8c13a40) (0x92a6fc0) Create stream
I0824 05:24:43.850960       7 log.go:172] (0x8c13a40) (0x92a6fc0) Stream added, broadcasting: 3
I0824 05:24:43.852944       7 log.go:172] (0x8c13a40) Reply frame received for 3
I0824 05:24:43.853118       7 log.go:172] (0x8c13a40) (0x910bab0) Create stream
I0824 05:24:43.853209       7 log.go:172] (0x8c13a40) (0x910bab0) Stream added, broadcasting: 5
I0824 05:24:43.854405       7 log.go:172] (0x8c13a40) Reply frame received for 5
I0824 05:24:43.934319       7 log.go:172] (0x8c13a40) Data frame received for 3
I0824 05:24:43.934537       7 log.go:172] (0x92a6fc0) (3) Data frame handling
I0824 05:24:43.934686       7 log.go:172] (0x8c13a40) Data frame received for 5
I0824 05:24:43.934832       7 log.go:172] (0x910bab0) (5) Data frame handling
I0824 05:24:43.934923       7 log.go:172] (0x92a6fc0) (3) Data frame sent
I0824 05:24:43.935255       7 log.go:172] (0x8c13a40) Data frame received for 3
I0824 05:24:43.935575       7 log.go:172] (0x92a6fc0) (3) Data frame handling
I0824 05:24:43.935870       7 log.go:172] (0x8c13a40) Data frame received for 1
I0824 05:24:43.936022       7 log.go:172] (0x8c13c00) (1) Data frame handling
I0824 05:24:43.936180       7 log.go:172] (0x8c13c00) (1) Data frame sent
I0824 05:24:43.936323       7 log.go:172] (0x8c13a40) (0x8c13c00) Stream removed, broadcasting: 1
I0824 05:24:43.936505       7 log.go:172] (0x8c13a40) Go away received
I0824 05:24:43.936947       7 log.go:172] (0x8c13a40) (0x8c13c00) Stream removed, broadcasting: 1
I0824 05:24:43.937107       7 log.go:172] (0x8c13a40) (0x92a6fc0) Stream removed, broadcasting: 3
I0824 05:24:43.937232       7 log.go:172] (0x8c13a40) (0x910bab0) Stream removed, broadcasting: 5
Aug 24 05:24:43.937: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:24:43.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7412" for this suite.
Aug 24 05:25:05.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:25:06.197: INFO: namespace pod-network-test-7412 deletion completed in 22.249451274s

• [SLOW TEST:50.945 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:25:06.199: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-5daa23e9-13f4-4fcd-a989-56bd2f800ac4
STEP: Creating a pod to test consume configMaps
Aug 24 05:25:06.322: INFO: Waiting up to 5m0s for pod "pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801" in namespace "configmap-459" to be "success or failure"
Aug 24 05:25:06.351: INFO: Pod "pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801": Phase="Pending", Reason="", readiness=false. Elapsed: 28.622455ms
Aug 24 05:25:08.800: INFO: Pod "pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801": Phase="Pending", Reason="", readiness=false. Elapsed: 2.477462897s
Aug 24 05:25:10.806: INFO: Pod "pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801": Phase="Running", Reason="", readiness=true. Elapsed: 4.484212637s
Aug 24 05:25:12.814: INFO: Pod "pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.49198265s
STEP: Saw pod success
Aug 24 05:25:12.815: INFO: Pod "pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801" satisfied condition "success or failure"
Aug 24 05:25:12.830: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801 container configmap-volume-test: 
STEP: delete the pod
Aug 24 05:25:12.871: INFO: Waiting for pod pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801 to disappear
Aug 24 05:25:12.892: INFO: Pod pod-configmaps-7adbb97a-c806-4d83-8d6f-10fd328a9801 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:25:12.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-459" for this suite.
Aug 24 05:25:18.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:25:19.096: INFO: namespace configmap-459 deletion completed in 6.194051157s

• [SLOW TEST:12.897 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:25:19.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0824 05:25:19.920516       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 24 05:25:19.920: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:25:19.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9650" for this suite.
Aug 24 05:25:25.950: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:25:26.083: INFO: namespace gc-9650 deletion completed in 6.154663645s

• [SLOW TEST:6.986 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:25:26.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Aug 24 05:25:26.167: INFO: Waiting up to 5m0s for pod "client-containers-df4ff4a3-8600-469b-b98a-42b82affb8d0" in namespace "containers-3881" to be "success or failure"
Aug 24 05:25:26.190: INFO: Pod "client-containers-df4ff4a3-8600-469b-b98a-42b82affb8d0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.655593ms
Aug 24 05:25:28.353: INFO: Pod "client-containers-df4ff4a3-8600-469b-b98a-42b82affb8d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186011867s
Aug 24 05:25:30.387: INFO: Pod "client-containers-df4ff4a3-8600-469b-b98a-42b82affb8d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.219538906s
STEP: Saw pod success
Aug 24 05:25:30.387: INFO: Pod "client-containers-df4ff4a3-8600-469b-b98a-42b82affb8d0" satisfied condition "success or failure"
Aug 24 05:25:30.392: INFO: Trying to get logs from node iruya-worker2 pod client-containers-df4ff4a3-8600-469b-b98a-42b82affb8d0 container test-container: 
STEP: delete the pod
Aug 24 05:25:30.419: INFO: Waiting for pod client-containers-df4ff4a3-8600-469b-b98a-42b82affb8d0 to disappear
Aug 24 05:25:30.511: INFO: Pod client-containers-df4ff4a3-8600-469b-b98a-42b82affb8d0 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:25:30.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-3881" for this suite.
Aug 24 05:25:36.545: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:25:36.685: INFO: namespace containers-3881 deletion completed in 6.165561194s

• [SLOW TEST:10.601 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:25:36.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d4404ba5-3cf7-4724-93f9-a0b2a4aded51
STEP: Creating a pod to test consume secrets
Aug 24 05:25:36.787: INFO: Waiting up to 5m0s for pod "pod-secrets-a42dff4f-9860-4db4-a980-451cf265d15a" in namespace "secrets-9751" to be "success or failure"
Aug 24 05:25:36.853: INFO: Pod "pod-secrets-a42dff4f-9860-4db4-a980-451cf265d15a": Phase="Pending", Reason="", readiness=false. Elapsed: 65.85059ms
Aug 24 05:25:38.861: INFO: Pod "pod-secrets-a42dff4f-9860-4db4-a980-451cf265d15a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073094797s
Aug 24 05:25:40.868: INFO: Pod "pod-secrets-a42dff4f-9860-4db4-a980-451cf265d15a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080375703s
STEP: Saw pod success
Aug 24 05:25:40.868: INFO: Pod "pod-secrets-a42dff4f-9860-4db4-a980-451cf265d15a" satisfied condition "success or failure"
Aug 24 05:25:40.873: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a42dff4f-9860-4db4-a980-451cf265d15a container secret-env-test: 
STEP: delete the pod
Aug 24 05:25:40.899: INFO: Waiting for pod pod-secrets-a42dff4f-9860-4db4-a980-451cf265d15a to disappear
Aug 24 05:25:40.960: INFO: Pod pod-secrets-a42dff4f-9860-4db4-a980-451cf265d15a no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:25:40.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9751" for this suite.
Aug 24 05:25:47.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:25:47.139: INFO: namespace secrets-9751 deletion completed in 6.168131279s

• [SLOW TEST:10.450 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
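The secrets test above repeatedly polls the pod until it reaches a terminal phase ("Waiting up to 5m0s for pod … to be 'success or failure'", then `Phase="Pending"` twice and `Phase="Succeeded"` after ~4s). A minimal sketch of that polling pattern, with hypothetical names (`wait_for_pod_condition`, `get_phase` are stand-ins, not the real e2e framework API):

```python
import time

def wait_for_pod_condition(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reaches a terminal phase, mirroring
    the framework's "Waiting up to 5m0s ... to be 'success or failure'"
    loop in the log. get_phase is a stand-in for an API-server lookup."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod: Phase="{phase}", Elapsed: {elapsed:.2f}s')
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.0f}s")
        time.sleep(interval)
```

The real framework polls roughly every 2 seconds, which matches the ~2s spacing of the `Elapsed:` lines in the log.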
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:25:47.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:26:13.292: INFO: Container started at 2020-08-24 05:25:49 +0000 UTC, pod became ready at 2020-08-24 05:26:11 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:26:13.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6667" for this suite.
Aug 24 05:26:35.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:26:35.464: INFO: namespace container-probe-6667 deletion completed in 22.161243448s

• [SLOW TEST:48.324 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
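The readiness-probe test passes because the pod became Ready only after the probe's initial delay: the log records "Container started at … 05:25:49 … pod became ready at … 05:26:11", a 22-second gap. A sketch of that invariant check, assuming a 20s `initialDelaySeconds` (the spec value is not shown in the log) and a simplified timestamp format without the `+0000 UTC` suffix:

```python
from datetime import datetime

def readiness_delay_ok(started, ready, initial_delay_seconds):
    """Check the invariant the probe test asserts: the pod must not
    report Ready before initialDelaySeconds have elapsed since the
    container started. Timestamps use a simplified format."""
    fmt = "%Y-%m-%d %H:%M:%S"
    delta = datetime.strptime(ready, fmt) - datetime.strptime(started, fmt)
    return delta.total_seconds() >= initial_delay_seconds
```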
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:26:35.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 24 05:26:39.625: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:26:39.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8903" for this suite.
Aug 24 05:26:45.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:26:45.844: INFO: namespace container-runtime-8903 deletion completed in 6.193546736s

• [SLOW TEST:10.380 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
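The termination-message test verifies `FallbackToLogsOnError` semantics: the container's message file is empty, the container fails, so the kubelet falls back to the log tail ("Expected: &{DONE} to match Container's Termination Message: DONE"). A toy model of that decision, simplified (the real kubelet also truncates long messages and reads the file from `terminationMessagePath`):

```python
def termination_message(policy, file_contents, exit_code, log_tail):
    """Sketch of how a termination message is chosen: the message file
    wins if non-empty; otherwise, under FallbackToLogsOnError, a failed
    container's log tail is used ("DONE" in the run above)."""
    if file_contents:
        return file_contents
    if policy == "FallbackToLogsOnError" and exit_code != 0:
        return log_tail
    return ""
```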
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:26:45.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0824 05:27:16.128313       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 24 05:27:16.128: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:27:16.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-1564" for this suite.
Aug 24 05:27:22.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:27:22.374: INFO: namespace gc-1564 deletion completed in 6.2343834s

• [SLOW TEST:36.529 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
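The garbage-collector test deletes the Deployment with `deleteOptions.PropagationPolicy: Orphan` and then waits 30 seconds to confirm the ReplicaSet is *not* collected. A toy model of the three propagation policies, using illustrative data shapes rather than real API objects:

```python
def delete_with_policy(owner_uid, dependents, policy):
    """Toy model of propagationPolicy: 'Orphan' strips the owner
    reference so dependents (the RS here) survive; 'Background' and
    'Foreground' delete dependents unless another owner remains."""
    survivors = []
    for dep in dependents:
        remaining = [r for r in dep.get("ownerReferences", []) if r != owner_uid]
        if policy == "Orphan":
            survivors.append({**dep, "ownerReferences": remaining})
        elif remaining:  # still owned by someone else
            survivors.append(dep)
    return survivors
```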
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:27:22.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-d86h
STEP: Creating a pod to test atomic-volume-subpath
Aug 24 05:27:22.517: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-d86h" in namespace "subpath-8954" to be "success or failure"
Aug 24 05:27:22.551: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Pending", Reason="", readiness=false. Elapsed: 34.264836ms
Aug 24 05:27:24.558: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040541062s
Aug 24 05:27:26.563: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 4.046407755s
Aug 24 05:27:28.571: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 6.053476819s
Aug 24 05:27:30.578: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 8.061092475s
Aug 24 05:27:32.585: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 10.067938033s
Aug 24 05:27:34.593: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 12.075631579s
Aug 24 05:27:36.600: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 14.082752622s
Aug 24 05:27:38.607: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 16.090066805s
Aug 24 05:27:40.615: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 18.097847142s
Aug 24 05:27:42.623: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 20.105450546s
Aug 24 05:27:44.630: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Running", Reason="", readiness=true. Elapsed: 22.113288985s
Aug 24 05:27:46.638: INFO: Pod "pod-subpath-test-secret-d86h": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.120559366s
STEP: Saw pod success
Aug 24 05:27:46.638: INFO: Pod "pod-subpath-test-secret-d86h" satisfied condition "success or failure"
Aug 24 05:27:46.643: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-d86h container test-container-subpath-secret-d86h: 
STEP: delete the pod
Aug 24 05:27:46.725: INFO: Waiting for pod pod-subpath-test-secret-d86h to disappear
Aug 24 05:27:46.751: INFO: Pod pod-subpath-test-secret-d86h no longer exists
STEP: Deleting pod pod-subpath-test-secret-d86h
Aug 24 05:27:46.751: INFO: Deleting pod "pod-subpath-test-secret-d86h" in namespace "subpath-8954"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:27:46.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8954" for this suite.
Aug 24 05:27:52.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:27:52.989: INFO: namespace subpath-8954 deletion completed in 6.221443945s

• [SLOW TEST:30.613 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
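The subpath test mounts a secret volume at a `subPath` and reads through it while the atomic writer updates the volume underneath. One property that mounting machinery must preserve is that the resolved sub path stays inside the volume root; a path-containment sketch only (the real kubelet additionally resolves symlinks with `openat` to prevent races):

```python
import posixpath

def resolve_subpath(volume_root, sub_path):
    """Join a subPath onto the volume root and reject any path that
    escapes it. Containment check only; symlink handling omitted."""
    full = posixpath.normpath(posixpath.join(volume_root, sub_path))
    if not full.startswith(volume_root.rstrip("/") + "/"):
        raise ValueError("subPath escapes the volume")
    return full
```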
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:27:52.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 24 05:28:01.182: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 24 05:28:01.195: INFO: Pod pod-with-poststart-http-hook still exists
Aug 24 05:28:03.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 24 05:28:03.203: INFO: Pod pod-with-poststart-http-hook still exists
Aug 24 05:28:05.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 24 05:28:05.203: INFO: Pod pod-with-poststart-http-hook still exists
Aug 24 05:28:07.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 24 05:28:07.204: INFO: Pod pod-with-poststart-http-hook still exists
Aug 24 05:28:09.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 24 05:28:09.203: INFO: Pod pod-with-poststart-http-hook still exists
Aug 24 05:28:11.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 24 05:28:11.205: INFO: Pod pod-with-poststart-http-hook still exists
Aug 24 05:28:13.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 24 05:28:13.204: INFO: Pod pod-with-poststart-http-hook still exists
Aug 24 05:28:15.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Aug 24 05:28:15.203: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:28:15.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-5897" for this suite.
Aug 24 05:28:37.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:28:37.400: INFO: namespace container-lifecycle-hook-5897 deletion completed in 22.188498049s

• [SLOW TEST:44.406 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
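The lifecycle-hook test creates a handler pod, then a pod whose `postStart` hook issues an HTTP GET against it ("check poststart hook"). The semantics being exercised: the hook runs after the container process starts, and a hook failure kills the container. A sketch with stand-in callables (not the real kubelet interfaces):

```python
def start_container_with_poststart(start, post_start_hook, kill):
    """postStart semantics in miniature: the hook runs after the
    container starts; if the hook raises, the container is killed.
    start / post_start_hook / kill are illustrative stand-ins."""
    start()
    try:
        post_start_hook()
    except Exception:
        kill()
        raise
```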
SSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:28:37.402: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-397e0d3f-1581-4ae9-9dd1-2ebbfaf9e2ab
STEP: Creating a pod to test consume configMaps
Aug 24 05:28:37.509: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a151a321-65d0-477e-8974-6094033f0cac" in namespace "projected-1868" to be "success or failure"
Aug 24 05:28:37.525: INFO: Pod "pod-projected-configmaps-a151a321-65d0-477e-8974-6094033f0cac": Phase="Pending", Reason="", readiness=false. Elapsed: 16.009427ms
Aug 24 05:28:39.569: INFO: Pod "pod-projected-configmaps-a151a321-65d0-477e-8974-6094033f0cac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059392673s
Aug 24 05:28:41.576: INFO: Pod "pod-projected-configmaps-a151a321-65d0-477e-8974-6094033f0cac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066446927s
STEP: Saw pod success
Aug 24 05:28:41.576: INFO: Pod "pod-projected-configmaps-a151a321-65d0-477e-8974-6094033f0cac" satisfied condition "success or failure"
Aug 24 05:28:41.580: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a151a321-65d0-477e-8974-6094033f0cac container projected-configmap-volume-test: 
STEP: delete the pod
Aug 24 05:28:41.604: INFO: Waiting for pod pod-projected-configmaps-a151a321-65d0-477e-8974-6094033f0cac to disappear
Aug 24 05:28:41.608: INFO: Pod pod-projected-configmaps-a151a321-65d0-477e-8974-6094033f0cac no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:28:41.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1868" for this suite.
Aug 24 05:28:47.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:28:47.803: INFO: namespace projected-1868 deletion completed in 6.188072114s

• [SLOW TEST:10.401 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:28:47.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-120c7f32-e942-4d1c-b14f-a4a6a4ae19f7 in namespace container-probe-7826
Aug 24 05:28:51.918: INFO: Started pod liveness-120c7f32-e942-4d1c-b14f-a4a6a4ae19f7 in namespace container-probe-7826
STEP: checking the pod's current state and verifying that restartCount is present
Aug 24 05:28:51.924: INFO: Initial restart count of pod liveness-120c7f32-e942-4d1c-b14f-a4a6a4ae19f7 is 0
Aug 24 05:29:03.972: INFO: Restart count of pod container-probe-7826/liveness-120c7f32-e942-4d1c-b14f-a4a6a4ae19f7 is now 1 (12.048206752s elapsed)
Aug 24 05:29:24.115: INFO: Restart count of pod container-probe-7826/liveness-120c7f32-e942-4d1c-b14f-a4a6a4ae19f7 is now 2 (32.190833613s elapsed)
Aug 24 05:29:44.186: INFO: Restart count of pod container-probe-7826/liveness-120c7f32-e942-4d1c-b14f-a4a6a4ae19f7 is now 3 (52.26174601s elapsed)
Aug 24 05:30:04.273: INFO: Restart count of pod container-probe-7826/liveness-120c7f32-e942-4d1c-b14f-a4a6a4ae19f7 is now 4 (1m12.348475569s elapsed)
Aug 24 05:31:04.482: INFO: Restart count of pod container-probe-7826/liveness-120c7f32-e942-4d1c-b14f-a4a6a4ae19f7 is now 5 (2m12.558270406s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:31:04.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7826" for this suite.
Aug 24 05:31:10.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:31:10.689: INFO: namespace container-probe-7826 deletion completed in 6.177917146s

• [SLOW TEST:142.881 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
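The liveness test above observes restart counts 0 → 1 → 2 → 3 → 4 → 5 over roughly two minutes and asserts the count only ever increases. The invariant over a sequence of observed counts can be checked as:

```python
def restarts_monotonic(observations):
    """True iff the observed restartCount sequence never decreases,
    the property 'should have monotonically increasing restart count'
    verifies (successive observed counts in the log: 0..5)."""
    return all(b >= a for a, b in zip(observations, observations[1:]))
```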
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:31:10.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 24 05:31:10.850: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:10.858: INFO: Number of nodes with available pods: 0
Aug 24 05:31:10.858: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:11.922: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:11.929: INFO: Number of nodes with available pods: 0
Aug 24 05:31:11.929: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:12.871: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:12.879: INFO: Number of nodes with available pods: 0
Aug 24 05:31:12.879: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:13.986: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:14.012: INFO: Number of nodes with available pods: 0
Aug 24 05:31:14.012: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:14.875: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:14.883: INFO: Number of nodes with available pods: 1
Aug 24 05:31:14.883: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:15.872: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:15.889: INFO: Number of nodes with available pods: 2
Aug 24 05:31:15.889: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 24 05:31:15.926: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:15.934: INFO: Number of nodes with available pods: 1
Aug 24 05:31:15.934: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:16.948: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:16.956: INFO: Number of nodes with available pods: 1
Aug 24 05:31:16.956: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:17.948: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:17.954: INFO: Number of nodes with available pods: 1
Aug 24 05:31:17.954: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:18.946: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:18.953: INFO: Number of nodes with available pods: 1
Aug 24 05:31:18.954: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:19.948: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:19.955: INFO: Number of nodes with available pods: 1
Aug 24 05:31:19.955: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:20.945: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:20.952: INFO: Number of nodes with available pods: 1
Aug 24 05:31:20.952: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:21.946: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:21.951: INFO: Number of nodes with available pods: 1
Aug 24 05:31:21.951: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:22.948: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:22.955: INFO: Number of nodes with available pods: 1
Aug 24 05:31:22.955: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:23.946: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:23.954: INFO: Number of nodes with available pods: 1
Aug 24 05:31:23.955: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:24.944: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:24.949: INFO: Number of nodes with available pods: 1
Aug 24 05:31:24.949: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:26.006: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:26.015: INFO: Number of nodes with available pods: 1
Aug 24 05:31:26.015: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:26.947: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:26.954: INFO: Number of nodes with available pods: 1
Aug 24 05:31:26.954: INFO: Node iruya-worker is running more than one daemon pod
Aug 24 05:31:27.948: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 24 05:31:27.955: INFO: Number of nodes with available pods: 2
Aug 24 05:31:27.956: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3088, will wait for the garbage collector to delete the pods
Aug 24 05:31:28.025: INFO: Deleting DaemonSet.extensions daemon-set took: 8.88943ms
Aug 24 05:31:28.326: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.803896ms
Aug 24 05:31:33.732: INFO: Number of nodes with available pods: 0
Aug 24 05:31:33.732: INFO: Number of running nodes: 0, number of available pods: 0
Aug 24 05:31:33.737: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3088/daemonsets","resourceVersion":"2299803"},"items":null}

Aug 24 05:31:33.741: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3088/pods","resourceVersion":"2299803"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:31:33.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3088" for this suite.
Aug 24 05:31:39.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:31:39.919: INFO: namespace daemonsets-3088 deletion completed in 6.148083155s

• [SLOW TEST:29.227 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
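A note on the repeated "can't tolerate node iruya-control-plane" lines above: the test's DaemonSet carries no toleration for the master taint, so the framework correctly skips that node when counting available pods. A minimal sketch of the toleration a DaemonSet would need in order to also schedule onto such a node (fragment only; field values mirror the taint shown in the log, everything else is illustrative):

```yaml
# Hypothetical DaemonSet pod-template fragment: tolerating the
# node-role.kubernetes.io/master NoSchedule taint so daemon pods
# also land on control-plane nodes.
spec:
  template:
    spec:
      tolerations:
      - key: node-role.kubernetes.io/master
        operator: Exists
        effect: NoSchedule
```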
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:31:39.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:31:40.058: INFO: Waiting up to 5m0s for pod "downwardapi-volume-56374be2-2cd2-497d-95c9-316c64a0e7e8" in namespace "downward-api-5310" to be "success or failure"
Aug 24 05:31:40.068: INFO: Pod "downwardapi-volume-56374be2-2cd2-497d-95c9-316c64a0e7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.482503ms
Aug 24 05:31:42.170: INFO: Pod "downwardapi-volume-56374be2-2cd2-497d-95c9-316c64a0e7e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.111882531s
Aug 24 05:31:44.178: INFO: Pod "downwardapi-volume-56374be2-2cd2-497d-95c9-316c64a0e7e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.119495066s
STEP: Saw pod success
Aug 24 05:31:44.178: INFO: Pod "downwardapi-volume-56374be2-2cd2-497d-95c9-316c64a0e7e8" satisfied condition "success or failure"
Aug 24 05:31:44.186: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-56374be2-2cd2-497d-95c9-316c64a0e7e8 container client-container: 
STEP: delete the pod
Aug 24 05:31:44.271: INFO: Waiting for pod downwardapi-volume-56374be2-2cd2-497d-95c9-316c64a0e7e8 to disappear
Aug 24 05:31:44.327: INFO: Pod downwardapi-volume-56374be2-2cd2-497d-95c9-316c64a0e7e8 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:31:44.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5310" for this suite.
Aug 24 05:31:50.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:31:50.606: INFO: namespace downward-api-5310 deletion completed in 6.2675601s

• [SLOW TEST:10.687 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
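The Downward API test above projects the container's cpu limit into a file via a downward API volume. An illustrative pod spec for that pattern (names and the busybox image are assumptions, not taken from the test; the `resourceFieldRef`/`divisor` fields are the standard downward API ones):

```yaml
# Downward API volume projecting the container's cpu limit into a file.
# With divisor "1m", a limit of 250m is written to the file as "250".
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m
```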
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:31:50.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1c5f9160-8f93-4bb5-81ce-7c3716bf1a84
STEP: Creating a pod to test consume configMaps
Aug 24 05:31:50.725: INFO: Waiting up to 5m0s for pod "pod-configmaps-6b6bdb9f-c287-40ec-8889-612db2351ad6" in namespace "configmap-4432" to be "success or failure"
Aug 24 05:31:50.741: INFO: Pod "pod-configmaps-6b6bdb9f-c287-40ec-8889-612db2351ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 16.444449ms
Aug 24 05:31:52.749: INFO: Pod "pod-configmaps-6b6bdb9f-c287-40ec-8889-612db2351ad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024181956s
Aug 24 05:31:54.757: INFO: Pod "pod-configmaps-6b6bdb9f-c287-40ec-8889-612db2351ad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032012916s
STEP: Saw pod success
Aug 24 05:31:54.757: INFO: Pod "pod-configmaps-6b6bdb9f-c287-40ec-8889-612db2351ad6" satisfied condition "success or failure"
Aug 24 05:31:54.762: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6b6bdb9f-c287-40ec-8889-612db2351ad6 container configmap-volume-test: 
STEP: delete the pod
Aug 24 05:31:54.785: INFO: Waiting for pod pod-configmaps-6b6bdb9f-c287-40ec-8889-612db2351ad6 to disappear
Aug 24 05:31:54.795: INFO: Pod pod-configmaps-6b6bdb9f-c287-40ec-8889-612db2351ad6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:31:54.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4432" for this suite.
Aug 24 05:32:00.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:32:00.958: INFO: namespace configmap-4432 deletion completed in 6.153126203s

• [SLOW TEST:10.351 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
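The ConfigMap test above consumes a key under a remapped file path while the pod runs as a non-root user. Illustrative manifests for that pattern (names, UID, and paths are assumptions; the `items` key-to-path mapping and pod-level `runAsUser` are the mechanisms the test exercises):

```yaml
# ConfigMap key remapped to a nested path, consumed by a non-root pod.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map    # illustrative name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000                  # non-root, as in the [LinuxOnly] variant
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-1
        path: path/to/data-2         # key remapped to a different file path
```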
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:32:00.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-cc2f081a-629a-40fe-9273-416aaf052c23
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:32:05.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9694" for this suite.
Aug 24 05:32:27.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:32:27.480: INFO: namespace configmap-9694 deletion completed in 22.197016851s

• [SLOW TEST:26.521 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
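The binary-data test above ("Waiting for pod with text data" / "Waiting for pod with binary data") relies on a ConfigMap carrying both kinds of payload. An illustrative sketch (name and payloads are assumptions): `data` holds UTF-8 strings, `binaryData` holds base64-encoded bytes, and both surface as files in a configMap volume.

```yaml
# ConfigMap mixing text and binary payloads.
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd           # illustrative name
data:
  data: "value"
binaryData:
  dump.bin: 3q2+7w==                 # base64 for bytes 0xde 0xad 0xbe 0xef
```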
SSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:32:27.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4719
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-4719
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4719
Aug 24 05:32:27.597: INFO: Found 0 stateful pods, waiting for 1
Aug 24 05:32:37.605: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 24 05:32:37.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 05:32:41.828: INFO: stderr: "I0824 05:32:41.629011    3042 log.go:172] (0x29bc0e0) (0x29bc150) Create stream\nI0824 05:32:41.632194    3042 log.go:172] (0x29bc0e0) (0x29bc150) Stream added, broadcasting: 1\nI0824 05:32:41.650676    3042 log.go:172] (0x29bc0e0) Reply frame received for 1\nI0824 05:32:41.651164    3042 log.go:172] (0x29bc0e0) (0x2666000) Create stream\nI0824 05:32:41.651235    3042 log.go:172] (0x29bc0e0) (0x2666000) Stream added, broadcasting: 3\nI0824 05:32:41.652472    3042 log.go:172] (0x29bc0e0) Reply frame received for 3\nI0824 05:32:41.652873    3042 log.go:172] (0x29bc0e0) (0x24ae380) Create stream\nI0824 05:32:41.652973    3042 log.go:172] (0x29bc0e0) (0x24ae380) Stream added, broadcasting: 5\nI0824 05:32:41.654200    3042 log.go:172] (0x29bc0e0) Reply frame received for 5\nI0824 05:32:41.740438    3042 log.go:172] (0x29bc0e0) Data frame received for 5\nI0824 05:32:41.740951    3042 log.go:172] (0x24ae380) (5) Data frame handling\nI0824 05:32:41.741581    3042 log.go:172] (0x24ae380) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 05:32:41.803338    3042 log.go:172] (0x29bc0e0) Data frame received for 3\nI0824 05:32:41.803563    3042 log.go:172] (0x2666000) (3) Data frame handling\nI0824 05:32:41.803751    3042 log.go:172] (0x2666000) (3) Data frame sent\nI0824 05:32:41.803890    3042 log.go:172] (0x29bc0e0) Data frame received for 3\nI0824 05:32:41.804042    3042 log.go:172] (0x2666000) (3) Data frame handling\nI0824 05:32:41.804335    3042 log.go:172] (0x29bc0e0) Data frame received for 5\nI0824 05:32:41.804587    3042 log.go:172] (0x24ae380) (5) Data frame handling\nI0824 05:32:41.805301    3042 log.go:172] (0x29bc0e0) Data frame received for 1\nI0824 05:32:41.805464    3042 log.go:172] (0x29bc150) (1) Data frame handling\nI0824 05:32:41.805630    3042 log.go:172] (0x29bc150) (1) Data frame sent\nI0824 05:32:41.807866    3042 log.go:172] (0x29bc0e0) (0x29bc150) Stream removed, broadcasting: 1\nI0824 
05:32:41.808440    3042 log.go:172] (0x29bc0e0) Go away received\nI0824 05:32:41.812333    3042 log.go:172] (0x29bc0e0) (0x29bc150) Stream removed, broadcasting: 1\nI0824 05:32:41.812892    3042 log.go:172] (0x29bc0e0) (0x2666000) Stream removed, broadcasting: 3\nI0824 05:32:41.813083    3042 log.go:172] (0x29bc0e0) (0x24ae380) Stream removed, broadcasting: 5\n"
Aug 24 05:32:41.828: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 05:32:41.829: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
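The `mv` above is how the test makes a stateful pod unhealthy: moving index.html out of nginx's web root makes the HTTP readiness probe fail, so ss-0 flips to Ready=false; moving the file back later restores readiness. A sketch of the kind of probe this manipulates (the exact probe values are assumptions, not taken from the test):

```yaml
# Container fragment: readiness hinges on index.html being servable,
# so relocating the file toggles the pod's Ready condition.
readinessProbe:
  httpGet:
    path: /index.html
    port: 80
  periodSeconds: 1
  failureThreshold: 1
```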

Aug 24 05:32:41.834: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 24 05:32:51.854: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 24 05:32:51.855: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 05:32:51.883: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:32:51.885: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:41 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:32:51.885: INFO: 
Aug 24 05:32:51.885: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 24 05:32:52.900: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991952605s
Aug 24 05:32:54.194: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.976853206s
Aug 24 05:32:55.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.681795459s
Aug 24 05:32:56.235: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.66592864s
Aug 24 05:32:57.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.642070697s
Aug 24 05:32:58.255: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.632828564s
Aug 24 05:32:59.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.622706676s
Aug 24 05:33:00.274: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.61280854s
Aug 24 05:33:01.284: INFO: Verifying statefulset ss doesn't scale past 3 for another 603.066051ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4719
Aug 24 05:33:02.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:33:03.694: INFO: stderr: "I0824 05:33:03.586127    3077 log.go:172] (0x2b465b0) (0x2b46620) Create stream\nI0824 05:33:03.588471    3077 log.go:172] (0x2b465b0) (0x2b46620) Stream added, broadcasting: 1\nI0824 05:33:03.599413    3077 log.go:172] (0x2b465b0) Reply frame received for 1\nI0824 05:33:03.600202    3077 log.go:172] (0x2b465b0) (0x2820cb0) Create stream\nI0824 05:33:03.600297    3077 log.go:172] (0x2b465b0) (0x2820cb0) Stream added, broadcasting: 3\nI0824 05:33:03.601997    3077 log.go:172] (0x2b465b0) Reply frame received for 3\nI0824 05:33:03.602271    3077 log.go:172] (0x2b465b0) (0x24ac850) Create stream\nI0824 05:33:03.602347    3077 log.go:172] (0x2b465b0) (0x24ac850) Stream added, broadcasting: 5\nI0824 05:33:03.603919    3077 log.go:172] (0x2b465b0) Reply frame received for 5\nI0824 05:33:03.673197    3077 log.go:172] (0x2b465b0) Data frame received for 5\nI0824 05:33:03.673480    3077 log.go:172] (0x2b465b0) Data frame received for 1\nI0824 05:33:03.673740    3077 log.go:172] (0x2b46620) (1) Data frame handling\nI0824 05:33:03.674368    3077 log.go:172] (0x24ac850) (5) Data frame handling\nI0824 05:33:03.674746    3077 log.go:172] (0x2b465b0) Data frame received for 3\nI0824 05:33:03.674968    3077 log.go:172] (0x2820cb0) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0824 05:33:03.675880    3077 log.go:172] (0x24ac850) (5) Data frame sent\nI0824 05:33:03.676317    3077 log.go:172] (0x2b46620) (1) Data frame sent\nI0824 05:33:03.676855    3077 log.go:172] (0x2820cb0) (3) Data frame sent\nI0824 05:33:03.677008    3077 log.go:172] (0x2b465b0) Data frame received for 5\nI0824 05:33:03.677118    3077 log.go:172] (0x24ac850) (5) Data frame handling\nI0824 05:33:03.677815    3077 log.go:172] (0x2b465b0) Data frame received for 3\nI0824 05:33:03.678551    3077 log.go:172] (0x2b465b0) (0x2b46620) Stream removed, broadcasting: 1\nI0824 05:33:03.679588    3077 log.go:172] (0x2820cb0) (3) Data frame handling\nI0824 
05:33:03.679876    3077 log.go:172] (0x2b465b0) Go away received\nI0824 05:33:03.681677    3077 log.go:172] (0x2b465b0) (0x2b46620) Stream removed, broadcasting: 1\nI0824 05:33:03.681958    3077 log.go:172] (0x2b465b0) (0x2820cb0) Stream removed, broadcasting: 3\nI0824 05:33:03.682117    3077 log.go:172] (0x2b465b0) (0x24ac850) Stream removed, broadcasting: 5\n"
Aug 24 05:33:03.695: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 24 05:33:03.695: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 24 05:33:03.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:33:05.126: INFO: stderr: "I0824 05:33:04.987888    3100 log.go:172] (0x24ba690) (0x24ba8c0) Create stream\nI0824 05:33:04.990412    3100 log.go:172] (0x24ba690) (0x24ba8c0) Stream added, broadcasting: 1\nI0824 05:33:05.003831    3100 log.go:172] (0x24ba690) Reply frame received for 1\nI0824 05:33:05.004536    3100 log.go:172] (0x24ba690) (0x2952000) Create stream\nI0824 05:33:05.004633    3100 log.go:172] (0x24ba690) (0x2952000) Stream added, broadcasting: 3\nI0824 05:33:05.006776    3100 log.go:172] (0x24ba690) Reply frame received for 3\nI0824 05:33:05.007286    3100 log.go:172] (0x24ba690) (0x283a000) Create stream\nI0824 05:33:05.007426    3100 log.go:172] (0x24ba690) (0x283a000) Stream added, broadcasting: 5\nI0824 05:33:05.009719    3100 log.go:172] (0x24ba690) Reply frame received for 5\nI0824 05:33:05.106307    3100 log.go:172] (0x24ba690) Data frame received for 3\nI0824 05:33:05.106747    3100 log.go:172] (0x24ba690) Data frame received for 5\nI0824 05:33:05.106984    3100 log.go:172] (0x283a000) (5) Data frame handling\nI0824 05:33:05.107161    3100 log.go:172] (0x2952000) (3) Data frame handling\nI0824 05:33:05.107999    3100 log.go:172] (0x283a000) (5) Data frame sent\nI0824 05:33:05.108377    3100 log.go:172] (0x2952000) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0824 05:33:05.108698    3100 log.go:172] (0x24ba690) Data frame received for 1\nI0824 05:33:05.108970    3100 log.go:172] (0x24ba8c0) (1) Data frame handling\nI0824 05:33:05.109128    3100 log.go:172] (0x24ba690) Data frame received for 3\nI0824 05:33:05.109235    3100 log.go:172] (0x2952000) (3) Data frame handling\nI0824 05:33:05.109395    3100 log.go:172] (0x24ba690) Data frame received for 5\nI0824 05:33:05.109587    3100 log.go:172] (0x283a000) (5) Data frame handling\nI0824 05:33:05.109755    3100 log.go:172] (0x24ba8c0) (1) Data frame sent\nI0824 05:33:05.112127    3100 
log.go:172] (0x24ba690) (0x24ba8c0) Stream removed, broadcasting: 1\nI0824 05:33:05.113255    3100 log.go:172] (0x24ba690) Go away received\nI0824 05:33:05.115743    3100 log.go:172] (0x24ba690) (0x24ba8c0) Stream removed, broadcasting: 1\nI0824 05:33:05.115997    3100 log.go:172] (0x24ba690) (0x2952000) Stream removed, broadcasting: 3\nI0824 05:33:05.116170    3100 log.go:172] (0x24ba690) (0x283a000) Stream removed, broadcasting: 5\n"
Aug 24 05:33:05.127: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 24 05:33:05.127: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 24 05:33:05.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:33:06.485: INFO: stderr: "I0824 05:33:06.395994    3123 log.go:172] (0x29d04d0) (0x29d0540) Create stream\nI0824 05:33:06.398269    3123 log.go:172] (0x29d04d0) (0x29d0540) Stream added, broadcasting: 1\nI0824 05:33:06.413998    3123 log.go:172] (0x29d04d0) Reply frame received for 1\nI0824 05:33:06.414695    3123 log.go:172] (0x29d04d0) (0x26631f0) Create stream\nI0824 05:33:06.414801    3123 log.go:172] (0x29d04d0) (0x26631f0) Stream added, broadcasting: 3\nI0824 05:33:06.416221    3123 log.go:172] (0x29d04d0) Reply frame received for 3\nI0824 05:33:06.416454    3123 log.go:172] (0x29d04d0) (0x2a5a000) Create stream\nI0824 05:33:06.416529    3123 log.go:172] (0x29d04d0) (0x2a5a000) Stream added, broadcasting: 5\nI0824 05:33:06.417948    3123 log.go:172] (0x29d04d0) Reply frame received for 5\nI0824 05:33:06.463585    3123 log.go:172] (0x29d04d0) Data frame received for 3\nI0824 05:33:06.463863    3123 log.go:172] (0x29d04d0) Data frame received for 5\nI0824 05:33:06.464102    3123 log.go:172] (0x2a5a000) (5) Data frame handling\nI0824 05:33:06.464211    3123 log.go:172] (0x26631f0) (3) Data frame handling\nI0824 05:33:06.464460    3123 log.go:172] (0x29d04d0) Data frame received for 1\nI0824 05:33:06.464565    3123 log.go:172] (0x29d0540) (1) Data frame handling\nI0824 05:33:06.464935    3123 log.go:172] (0x26631f0) (3) Data frame sent\nI0824 05:33:06.465217    3123 log.go:172] (0x29d0540) (1) Data frame sent\nI0824 05:33:06.465365    3123 log.go:172] (0x2a5a000) (5) Data frame sent\nI0824 05:33:06.465641    3123 log.go:172] (0x29d04d0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0824 05:33:06.465793    3123 log.go:172] (0x2a5a000) (5) Data frame handling\nI0824 05:33:06.465893    3123 log.go:172] (0x29d04d0) Data frame received for 3\nI0824 05:33:06.466011    3123 log.go:172] (0x26631f0) (3) Data frame handling\nI0824 05:33:06.468374    3123 
log.go:172] (0x29d04d0) (0x29d0540) Stream removed, broadcasting: 1\nI0824 05:33:06.469942    3123 log.go:172] (0x29d04d0) Go away received\nI0824 05:33:06.472116    3123 log.go:172] (0x29d04d0) (0x29d0540) Stream removed, broadcasting: 1\nI0824 05:33:06.472338    3123 log.go:172] (0x29d04d0) (0x26631f0) Stream removed, broadcasting: 3\nI0824 05:33:06.472462    3123 log.go:172] (0x29d04d0) (0x2a5a000) Stream removed, broadcasting: 5\n"
Aug 24 05:33:06.486: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 24 05:33:06.486: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 24 05:33:06.494: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 05:33:06.494: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 24 05:33:06.494: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 24 05:33:06.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 05:33:07.879: INFO: stderr: "I0824 05:33:07.772048    3146 log.go:172] (0x26780e0) (0x2678a10) Create stream\nI0824 05:33:07.774932    3146 log.go:172] (0x26780e0) (0x2678a10) Stream added, broadcasting: 1\nI0824 05:33:07.787526    3146 log.go:172] (0x26780e0) Reply frame received for 1\nI0824 05:33:07.788391    3146 log.go:172] (0x26780e0) (0x24a4310) Create stream\nI0824 05:33:07.788509    3146 log.go:172] (0x26780e0) (0x24a4310) Stream added, broadcasting: 3\nI0824 05:33:07.790720    3146 log.go:172] (0x26780e0) Reply frame received for 3\nI0824 05:33:07.791150    3146 log.go:172] (0x26780e0) (0x2833ea0) Create stream\nI0824 05:33:07.791271    3146 log.go:172] (0x26780e0) (0x2833ea0) Stream added, broadcasting: 5\nI0824 05:33:07.793039    3146 log.go:172] (0x26780e0) Reply frame received for 5\nI0824 05:33:07.859589    3146 log.go:172] (0x26780e0) Data frame received for 5\nI0824 05:33:07.859948    3146 log.go:172] (0x26780e0) Data frame received for 3\nI0824 05:33:07.860149    3146 log.go:172] (0x24a4310) (3) Data frame handling\nI0824 05:33:07.860581    3146 log.go:172] (0x26780e0) Data frame received for 1\nI0824 05:33:07.860696    3146 log.go:172] (0x2678a10) (1) Data frame handling\nI0824 05:33:07.861030    3146 log.go:172] (0x2833ea0) (5) Data frame handling\nI0824 05:33:07.862512    3146 log.go:172] (0x24a4310) (3) Data frame sent\nI0824 05:33:07.862765    3146 log.go:172] (0x2678a10) (1) Data frame sent\nI0824 05:33:07.862875    3146 log.go:172] (0x2833ea0) (5) Data frame sent\nI0824 05:33:07.863223    3146 log.go:172] (0x26780e0) Data frame received for 5\nI0824 05:33:07.863293    3146 log.go:172] (0x2833ea0) (5) Data frame handling\nI0824 05:33:07.863454    3146 log.go:172] (0x26780e0) Data frame received for 3\nI0824 05:33:07.863532    3146 log.go:172] (0x24a4310) (3) Data frame handling\nI0824 05:33:07.864359    3146 log.go:172] (0x26780e0) (0x2678a10) Stream removed, broadcasting: 1\nI0824 05:33:07.864692    3146 log.go:172] (0x26780e0) Go 
away received\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 05:33:07.868431    3146 log.go:172] (0x26780e0) (0x2678a10) Stream removed, broadcasting: 1\nI0824 05:33:07.868678    3146 log.go:172] (0x26780e0) (0x24a4310) Stream removed, broadcasting: 3\nI0824 05:33:07.868959    3146 log.go:172] (0x26780e0) (0x2833ea0) Stream removed, broadcasting: 5\n"
Aug 24 05:33:07.879: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 05:33:07.879: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 24 05:33:07.880: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 05:33:09.318: INFO: stderr: "I0824 05:33:09.146062    3170 log.go:172] (0x2ac5570) (0x2ac55e0) Create stream\nI0824 05:33:09.147866    3170 log.go:172] (0x2ac5570) (0x2ac55e0) Stream added, broadcasting: 1\nI0824 05:33:09.164490    3170 log.go:172] (0x2ac5570) Reply frame received for 1\nI0824 05:33:09.165244    3170 log.go:172] (0x2ac5570) (0x2652150) Create stream\nI0824 05:33:09.165336    3170 log.go:172] (0x2ac5570) (0x2652150) Stream added, broadcasting: 3\nI0824 05:33:09.166722    3170 log.go:172] (0x2ac5570) Reply frame received for 3\nI0824 05:33:09.167019    3170 log.go:172] (0x2ac5570) (0x24a2930) Create stream\nI0824 05:33:09.167101    3170 log.go:172] (0x2ac5570) (0x24a2930) Stream added, broadcasting: 5\nI0824 05:33:09.168488    3170 log.go:172] (0x2ac5570) Reply frame received for 5\nI0824 05:33:09.260688    3170 log.go:172] (0x2ac5570) Data frame received for 5\nI0824 05:33:09.261158    3170 log.go:172] (0x24a2930) (5) Data frame handling\nI0824 05:33:09.261975    3170 log.go:172] (0x24a2930) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 05:33:09.296149    3170 log.go:172] (0x2ac5570) Data frame received for 5\nI0824 05:33:09.296369    3170 log.go:172] (0x24a2930) (5) Data frame handling\nI0824 05:33:09.296550    3170 log.go:172] (0x2ac5570) Data frame received for 3\nI0824 05:33:09.296847    3170 log.go:172] (0x2652150) (3) Data frame handling\nI0824 05:33:09.297037    3170 log.go:172] (0x2652150) (3) Data frame sent\nI0824 05:33:09.297163    3170 log.go:172] (0x2ac5570) Data frame received for 3\nI0824 05:33:09.297278    3170 log.go:172] (0x2652150) (3) Data frame handling\nI0824 05:33:09.297727    3170 log.go:172] (0x2ac5570) Data frame received for 1\nI0824 05:33:09.297844    3170 log.go:172] (0x2ac55e0) (1) Data frame handling\nI0824 05:33:09.297977    3170 log.go:172] (0x2ac55e0) (1) Data frame sent\nI0824 05:33:09.298678    3170 log.go:172] (0x2ac5570) (0x2ac55e0) Stream removed, broadcasting: 1\nI0824 
05:33:09.301865    3170 log.go:172] (0x2ac5570) Go away received\nI0824 05:33:09.304534    3170 log.go:172] (0x2ac5570) (0x2ac55e0) Stream removed, broadcasting: 1\nI0824 05:33:09.305106    3170 log.go:172] (0x2ac5570) (0x2652150) Stream removed, broadcasting: 3\nI0824 05:33:09.305513    3170 log.go:172] (0x2ac5570) (0x24a2930) Stream removed, broadcasting: 5\n"
Aug 24 05:33:09.319: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 05:33:09.319: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 24 05:33:09.319: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 24 05:33:10.815: INFO: stderr: "I0824 05:33:10.658387    3194 log.go:172] (0x281de30) (0x2990000) Create stream\nI0824 05:33:10.662613    3194 log.go:172] (0x281de30) (0x2990000) Stream added, broadcasting: 1\nI0824 05:33:10.673474    3194 log.go:172] (0x281de30) Reply frame received for 1\nI0824 05:33:10.673996    3194 log.go:172] (0x281de30) (0x24ac310) Create stream\nI0824 05:33:10.674086    3194 log.go:172] (0x281de30) (0x24ac310) Stream added, broadcasting: 3\nI0824 05:33:10.675719    3194 log.go:172] (0x281de30) Reply frame received for 3\nI0824 05:33:10.676233    3194 log.go:172] (0x281de30) (0x24ac8c0) Create stream\nI0824 05:33:10.676359    3194 log.go:172] (0x281de30) (0x24ac8c0) Stream added, broadcasting: 5\nI0824 05:33:10.678365    3194 log.go:172] (0x281de30) Reply frame received for 5\nI0824 05:33:10.763267    3194 log.go:172] (0x281de30) Data frame received for 5\nI0824 05:33:10.763608    3194 log.go:172] (0x24ac8c0) (5) Data frame handling\nI0824 05:33:10.764189    3194 log.go:172] (0x24ac8c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0824 05:33:10.794126    3194 log.go:172] (0x281de30) Data frame received for 3\nI0824 05:33:10.794379    3194 log.go:172] (0x24ac310) (3) Data frame handling\nI0824 05:33:10.794600    3194 log.go:172] (0x24ac310) (3) Data frame sent\nI0824 05:33:10.794815    3194 log.go:172] (0x281de30) Data frame received for 3\nI0824 05:33:10.795031    3194 log.go:172] (0x24ac310) (3) Data frame handling\nI0824 05:33:10.795371    3194 log.go:172] (0x281de30) Data frame received for 5\nI0824 05:33:10.795516    3194 log.go:172] (0x24ac8c0) (5) Data frame handling\nI0824 05:33:10.795992    3194 log.go:172] (0x281de30) Data frame received for 1\nI0824 05:33:10.796231    3194 log.go:172] (0x2990000) (1) Data frame handling\nI0824 05:33:10.796456    3194 log.go:172] (0x2990000) (1) Data frame sent\nI0824 05:33:10.797561    3194 log.go:172] (0x281de30) (0x2990000) Stream removed, broadcasting: 1\nI0824 
05:33:10.799477    3194 log.go:172] (0x281de30) (0x2990000) Stream removed, broadcasting: 1\nI0824 05:33:10.799887    3194 log.go:172] (0x281de30) (0x24ac310) Stream removed, broadcasting: 3\nI0824 05:33:10.802756    3194 log.go:172] (0x281de30) Go away received\nI0824 05:33:10.803734    3194 log.go:172] (0x281de30) (0x24ac8c0) Stream removed, broadcasting: 5\n"
Aug 24 05:33:10.816: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 24 05:33:10.816: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 24 05:33:10.816: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 05:33:10.821: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Aug 24 05:33:20.838: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 24 05:33:20.839: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 24 05:33:20.839: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 24 05:33:20.859: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:20.859: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:20.859: INFO: ss-1  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:20.860: INFO: ss-2  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:20.860: INFO: 
Aug 24 05:33:20.860: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 24 05:33:21.951: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:21.952: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:21.953: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:21.953: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:21.953: INFO: 
Aug 24 05:33:21.953: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 24 05:33:22.964: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:22.964: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:22.964: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:22.965: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:22.965: INFO: 
Aug 24 05:33:22.965: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 24 05:33:23.974: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:23.974: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:23.974: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:23.975: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:11 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:23.975: INFO: 
Aug 24 05:33:23.975: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 24 05:33:24.984: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:24.984: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:24.984: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:24.985: INFO: 
Aug 24 05:33:24.985: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 24 05:33:25.996: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:25.996: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:25.997: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:25.997: INFO: 
Aug 24 05:33:25.997: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 24 05:33:27.006: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:27.006: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:27.007: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:27.008: INFO: 
Aug 24 05:33:27.008: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 24 05:33:28.016: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:28.016: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:28.016: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:28.017: INFO: 
Aug 24 05:33:28.017: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 24 05:33:29.026: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:29.026: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:29.027: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:29.027: INFO: 
Aug 24 05:33:29.027: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 24 05:33:30.035: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 24 05:33:30.035: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:07 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:27 +0000 UTC  }]
Aug 24 05:33:30.036: INFO: ss-1  iruya-worker   Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:33:09 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:32:51 +0000 UTC  }]
Aug 24 05:33:30.036: INFO: 
Aug 24 05:33:30.036: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-4719
Aug 24 05:33:31.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:33:32.365: INFO: rc: 1
Aug 24 05:33:32.366: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0x9389590 exit status 1   true [0x7914878 0x7914898 0x79148b8] [0x7914878 0x7914898 0x79148b8] [0x7914890 0x79148b0] [0x6bbb70 0x6bbb70] 0x8b2d940 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Aug 24 05:33:42.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:33:43.475: INFO: rc: 1
Aug 24 05:33:43.475: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x71a16b0 exit status 1   true [0x98a14f8 0x98a1518 0x98a1538] [0x98a14f8 0x98a1518 0x98a1538] [0x98a1510 0x98a1530] [0x6bbb70 0x6bbb70] 0x9986cc0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:33:53.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:33:54.622: INFO: rc: 1
Aug 24 05:33:54.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x943a090 exit status 1   true [0x95da030 0x95da050 0x95da070] [0x95da030 0x95da050 0x95da070] [0x95da048 0x95da068] [0x6bbb70 0x6bbb70] 0x8dac380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:34:04.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:34:05.745: INFO: rc: 1
Aug 24 05:34:05.745: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x8c60090 exit status 1   true [0x96e2028 0x96e2048 0x96e2068] [0x96e2028 0x96e2048 0x96e2068] [0x96e2040 0x96e2060] [0x6bbb70 0x6bbb70] 0x8ab2280 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:34:15.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:34:16.891: INFO: rc: 1
Aug 24 05:34:16.892: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x8c60180 exit status 1   true [0x96e2108 0x96e2128 0x96e2148] [0x96e2108 0x96e2128 0x96e2148] [0x96e2120 0x96e2140] [0x6bbb70 0x6bbb70] 0x8ab24c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:34:26.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:34:28.042: INFO: rc: 1
Aug 24 05:34:28.042: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x943a180 exit status 1   true [0x95da110 0x95da130 0x95da150] [0x95da110 0x95da130 0x95da150] [0x95da128 0x95da148] [0x6bbb70 0x6bbb70] 0x8dac640 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:34:38.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:34:39.169: INFO: rc: 1
Aug 24 05:34:39.169: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x8c60270 exit status 1   true [0x96e24f8 0x96e2518 0x96e2538] [0x96e24f8 0x96e2518 0x96e2538] [0x96e2510 0x96e2530] [0x6bbb70 0x6bbb70] 0x8ab2780 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:34:49.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:34:50.303: INFO: rc: 1
Aug 24 05:34:50.304: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x7880090 exit status 1   true [0x9572028 0x9572048 0x9572068] [0x9572028 0x9572048 0x9572068] [0x9572040 0x9572060] [0x6bbb70 0x6bbb70] 0x8be64c0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:35:00.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:35:01.445: INFO: rc: 1
Aug 24 05:35:01.445: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x943a2a0 exit status 1   true [0x95da258 0x95da278 0x95da298] [0x95da258 0x95da278 0x95da298] [0x95da270 0x95da290] [0x6bbb70 0x6bbb70] 0x8dac980 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:35:11.446: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:35:12.587: INFO: rc: 1
Aug 24 05:35:12.588: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x89e80c0 exit status 1   true [0x7592028 0x7592048 0x7592068] [0x7592028 0x7592048 0x7592068] [0x7592040 0x7592060] [0x6bbb70 0x6bbb70] 0x8d48280 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:35:22.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:35:23.707: INFO: rc: 1
Aug 24 05:35:23.708: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x943a3c0 exit status 1   true [0x95da3a0 0x95da3c0 0x95da3e0] [0x95da3a0 0x95da3c0 0x95da3e0] [0x95da3b8 0x95da3d8] [0x6bbb70 0x6bbb70] 0x8dace80 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:35:33.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:35:34.832: INFO: rc: 1
Aug 24 05:35:34.832: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x8c60390 exit status 1   true [0x96e2670 0x96e26b0 0x96e26d0] [0x96e2670 0x96e26b0 0x96e26d0] [0x96e26a8 0x96e26c8] [0x6bbb70 0x6bbb70] 0x8ab2ac0 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:35:44.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:35:45.999: INFO: rc: 1
Aug 24 05:35:45.999: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x943a480 exit status 1   true [0x95da418 0x95da438 0x95da458] [0x95da418 0x95da438 0x95da458] [0x95da430 0x95da450] [0x6bbb70 0x6bbb70] 0x8dad440 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:35:56.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:35:57.108: INFO: rc: 1
Aug 24 05:35:57.108: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x943a0c0 exit status 1   true [0x95da030 0x95da050 0x95da070] [0x95da030 0x95da050 0x95da070] [0x95da048 0x95da068] [0x6bbb70 0x6bbb70] 0x8dac380 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:36:07.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:36:08.252: INFO: rc: 1
Aug 24 05:36:08.253: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x943a1e0 exit status 1   true [0x95da110 0x95da130 0x95da150] [0x95da110 0x95da130 0x95da150] [0x95da128 0x95da148] [0x6bbb70 0x6bbb70] 0x8dac640 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 24 05:36:18.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:36:19.410: INFO: rc: 1
Aug 24 05:36:19.411: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0x8c600c0 exit status 1   true [0x96e2028 0x96e2048 0x96e2068] [0x96e2028 0x96e2048 0x96e2068] [0x96e2040 0x96e2060] [0x6bbb70 0x6bbb70] 0x8ab2280 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
[... 11 further identical retry attempts between 05:36:29 and 05:38:10, one every ~10s, each running the same kubectl exec and failing with: Error from server (NotFound): pods "ss-0" not found ...]
Aug 24 05:38:32.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4719 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 24 05:38:33.125: INFO: rc: 1
Aug 24 05:38:33.125: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Aug 24 05:38:33.125: INFO: Scaling statefulset ss to 0
Aug 24 05:38:33.138: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 24 05:38:33.142: INFO: Deleting all statefulset in ns statefulset-4719
Aug 24 05:38:33.145: INFO: Scaling statefulset ss to 0
Aug 24 05:38:33.157: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 05:38:33.161: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:38:33.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4719" for this suite.
Aug 24 05:38:39.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:38:39.359: INFO: namespace statefulset-4719 deletion completed in 6.166860357s

• [SLOW TEST:371.877 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:38:39.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating secret secrets-5816/secret-test-e8a20c59-154e-45ca-a99b-5b3d8a398fe2
STEP: Creating a pod to test consume secrets
Aug 24 05:38:39.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-4872ffff-5d92-4364-b1ae-ec6003105571" in namespace "secrets-5816" to be "success or failure"
Aug 24 05:38:39.491: INFO: Pod "pod-configmaps-4872ffff-5d92-4364-b1ae-ec6003105571": Phase="Pending", Reason="", readiness=false. Elapsed: 17.963255ms
Aug 24 05:38:41.498: INFO: Pod "pod-configmaps-4872ffff-5d92-4364-b1ae-ec6003105571": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024460014s
Aug 24 05:38:43.519: INFO: Pod "pod-configmaps-4872ffff-5d92-4364-b1ae-ec6003105571": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045195647s
STEP: Saw pod success
Aug 24 05:38:43.519: INFO: Pod "pod-configmaps-4872ffff-5d92-4364-b1ae-ec6003105571" satisfied condition "success or failure"
Aug 24 05:38:43.540: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4872ffff-5d92-4364-b1ae-ec6003105571 container env-test: 
STEP: delete the pod
Aug 24 05:38:43.583: INFO: Waiting for pod pod-configmaps-4872ffff-5d92-4364-b1ae-ec6003105571 to disappear
Aug 24 05:38:43.599: INFO: Pod pod-configmaps-4872ffff-5d92-4364-b1ae-ec6003105571 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:38:43.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5816" for this suite.
Aug 24 05:38:49.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:38:49.813: INFO: namespace secrets-5816 deletion completed in 6.203407857s

• [SLOW TEST:10.451 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:38:49.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Aug 24 05:38:49.938: INFO: Waiting up to 5m0s for pod "client-containers-b4a79da3-ec1f-4cae-9c2d-8845ed35f9c4" in namespace "containers-7018" to be "success or failure"
Aug 24 05:38:50.049: INFO: Pod "client-containers-b4a79da3-ec1f-4cae-9c2d-8845ed35f9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 110.361583ms
Aug 24 05:38:52.061: INFO: Pod "client-containers-b4a79da3-ec1f-4cae-9c2d-8845ed35f9c4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.123012017s
Aug 24 05:38:54.068: INFO: Pod "client-containers-b4a79da3-ec1f-4cae-9c2d-8845ed35f9c4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.129444422s
STEP: Saw pod success
Aug 24 05:38:54.068: INFO: Pod "client-containers-b4a79da3-ec1f-4cae-9c2d-8845ed35f9c4" satisfied condition "success or failure"
Aug 24 05:38:54.073: INFO: Trying to get logs from node iruya-worker pod client-containers-b4a79da3-ec1f-4cae-9c2d-8845ed35f9c4 container test-container: 
STEP: delete the pod
Aug 24 05:38:54.186: INFO: Waiting for pod client-containers-b4a79da3-ec1f-4cae-9c2d-8845ed35f9c4 to disappear
Aug 24 05:38:54.271: INFO: Pod client-containers-b4a79da3-ec1f-4cae-9c2d-8845ed35f9c4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:38:54.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7018" for this suite.
Aug 24 05:39:00.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:39:00.493: INFO: namespace containers-7018 deletion completed in 6.211419627s

• [SLOW TEST:10.679 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:39:00.497: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Aug 24 05:39:00.644: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 24 05:39:09.697: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:39:09.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9071" for this suite.
Aug 24 05:39:15.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:39:15.927: INFO: namespace pods-9071 deletion completed in 6.214775806s

• [SLOW TEST:15.431 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:39:15.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:39:16.040: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8396c597-4940-4c19-81b3-376999e5a82c" in namespace "downward-api-5610" to be "success or failure"
Aug 24 05:39:16.100: INFO: Pod "downwardapi-volume-8396c597-4940-4c19-81b3-376999e5a82c": Phase="Pending", Reason="", readiness=false. Elapsed: 60.080232ms
Aug 24 05:39:18.110: INFO: Pod "downwardapi-volume-8396c597-4940-4c19-81b3-376999e5a82c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070293046s
Aug 24 05:39:20.116: INFO: Pod "downwardapi-volume-8396c597-4940-4c19-81b3-376999e5a82c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075866378s
STEP: Saw pod success
Aug 24 05:39:20.116: INFO: Pod "downwardapi-volume-8396c597-4940-4c19-81b3-376999e5a82c" satisfied condition "success or failure"
Aug 24 05:39:20.120: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8396c597-4940-4c19-81b3-376999e5a82c container client-container: 
STEP: delete the pod
Aug 24 05:39:20.184: INFO: Waiting for pod downwardapi-volume-8396c597-4940-4c19-81b3-376999e5a82c to disappear
Aug 24 05:39:20.217: INFO: Pod downwardapi-volume-8396c597-4940-4c19-81b3-376999e5a82c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:39:20.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5610" for this suite.
Aug 24 05:39:26.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:39:26.439: INFO: namespace downward-api-5610 deletion completed in 6.210306965s

• [SLOW TEST:10.509 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:39:26.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Aug 24 05:39:34.628: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:34.675: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:36.675: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:36.685: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:38.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:38.684: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:40.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:40.683: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:42.675: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:42.683: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:44.675: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:44.682: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:46.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:46.684: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:48.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:48.683: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:50.675: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:50.681: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:52.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:52.682: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:54.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:54.694: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:56.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:56.684: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:39:58.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:39:58.684: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:40:00.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:40:00.683: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:40:02.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:40:02.684: INFO: Pod pod-with-poststart-exec-hook still exists
Aug 24 05:40:04.676: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Aug 24 05:40:04.682: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:40:04.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-241" for this suite.
Aug 24 05:40:18.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:40:18.873: INFO: namespace container-lifecycle-hook-241 deletion completed in 14.178715986s

• [SLOW TEST:52.434 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:40:18.876: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7442
[It] Should recreate evicted statefulset [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-7442
STEP: Creating statefulset with conflicting port in namespace statefulset-7442
STEP: Waiting until pod test-pod will start running in namespace statefulset-7442
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7442
Aug 24 05:40:25.058: INFO: Observed stateful pod in namespace: statefulset-7442, name: ss-0, uid: f3a03638-31f8-49d9-88cb-f0993df8968f, status phase: Pending. Waiting for statefulset controller to delete.
Aug 24 05:40:25.201: INFO: Observed stateful pod in namespace: statefulset-7442, name: ss-0, uid: f3a03638-31f8-49d9-88cb-f0993df8968f, status phase: Failed. Waiting for statefulset controller to delete.
Aug 24 05:40:25.239: INFO: Observed stateful pod in namespace: statefulset-7442, name: ss-0, uid: f3a03638-31f8-49d9-88cb-f0993df8968f, status phase: Failed. Waiting for statefulset controller to delete.
Aug 24 05:40:25.250: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7442
STEP: Removing pod with conflicting port in namespace statefulset-7442
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7442 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 24 05:40:31.347: INFO: Deleting all statefulset in ns statefulset-7442
Aug 24 05:40:31.353: INFO: Scaling statefulset ss to 0
Aug 24 05:40:51.380: INFO: Waiting for statefulset status.replicas updated to 0
Aug 24 05:40:51.386: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:40:51.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7442" for this suite.
Aug 24 05:40:57.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:40:57.576: INFO: namespace statefulset-7442 deletion completed in 6.157227534s

• [SLOW TEST:38.700 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:40:57.581: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 24 05:41:02.264: INFO: Successfully updated pod "labelsupdate6c423d4f-0bef-4fda-8589-5dace10956cd"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:41:06.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2426" for this suite.
Aug 24 05:41:28.332: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:41:28.474: INFO: namespace downward-api-2426 deletion completed in 22.167730763s

• [SLOW TEST:30.894 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:41:28.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 24 05:41:28.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6034'
Aug 24 05:41:29.764: INFO: stderr: ""
Aug 24 05:41:29.764: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 24 05:41:29.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6034'
Aug 24 05:41:32.022: INFO: stderr: ""
Aug 24 05:41:32.023: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:41:32.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6034" for this suite.
Aug 24 05:41:38.228: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:41:38.367: INFO: namespace kubectl-6034 deletion completed in 6.308696433s

• [SLOW TEST:9.892 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:41:38.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 24 05:41:42.569: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 24 05:41:53.630: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:41:53.636: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1527" for this suite.
Aug 24 05:41:59.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:41:59.817: INFO: namespace pods-1527 deletion completed in 6.1698353s

• [SLOW TEST:21.448 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:41:59.820: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Aug 24 05:41:59.933: INFO: Waiting up to 5m0s for pod "var-expansion-d6823e23-7332-423c-a6d7-1dee78916042" in namespace "var-expansion-2554" to be "success or failure"
Aug 24 05:41:59.956: INFO: Pod "var-expansion-d6823e23-7332-423c-a6d7-1dee78916042": Phase="Pending", Reason="", readiness=false. Elapsed: 22.257321ms
Aug 24 05:42:01.964: INFO: Pod "var-expansion-d6823e23-7332-423c-a6d7-1dee78916042": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030356004s
Aug 24 05:42:03.972: INFO: Pod "var-expansion-d6823e23-7332-423c-a6d7-1dee78916042": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038824023s
STEP: Saw pod success
Aug 24 05:42:03.973: INFO: Pod "var-expansion-d6823e23-7332-423c-a6d7-1dee78916042" satisfied condition "success or failure"
Aug 24 05:42:03.994: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-d6823e23-7332-423c-a6d7-1dee78916042 container dapi-container: 
STEP: delete the pod
Aug 24 05:42:04.024: INFO: Waiting for pod var-expansion-d6823e23-7332-423c-a6d7-1dee78916042 to disappear
Aug 24 05:42:04.063: INFO: Pod var-expansion-d6823e23-7332-423c-a6d7-1dee78916042 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:42:04.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2554" for this suite.
Aug 24 05:42:10.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:42:10.223: INFO: namespace var-expansion-2554 deletion completed in 6.149148396s

• [SLOW TEST:10.404 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:42:10.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-9478
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-9478
STEP: Deleting pre-stop pod
Aug 24 05:42:23.410: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:42:23.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-9478" for this suite.
Aug 24 05:43:01.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:43:01.666: INFO: namespace prestop-9478 deletion completed in 38.221852814s

• [SLOW TEST:51.440 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:43:01.669: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0824 05:43:11.892471       7 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 24 05:43:11.892: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:43:11.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9983" for this suite.
Aug 24 05:43:17.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:43:18.084: INFO: namespace gc-9983 deletion completed in 6.182106226s

• [SLOW TEST:16.416 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:43:18.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Aug 24 05:43:18.213: INFO: Waiting up to 5m0s for pod "var-expansion-66d6b557-4c31-4129-af24-0d983da1e3a8" in namespace "var-expansion-8729" to be "success or failure"
Aug 24 05:43:18.232: INFO: Pod "var-expansion-66d6b557-4c31-4129-af24-0d983da1e3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 17.918443ms
Aug 24 05:43:20.239: INFO: Pod "var-expansion-66d6b557-4c31-4129-af24-0d983da1e3a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025233944s
Aug 24 05:43:22.247: INFO: Pod "var-expansion-66d6b557-4c31-4129-af24-0d983da1e3a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033037496s
STEP: Saw pod success
Aug 24 05:43:22.247: INFO: Pod "var-expansion-66d6b557-4c31-4129-af24-0d983da1e3a8" satisfied condition "success or failure"
Aug 24 05:43:22.252: INFO: Trying to get logs from node iruya-worker pod var-expansion-66d6b557-4c31-4129-af24-0d983da1e3a8 container dapi-container: 
STEP: delete the pod
Aug 24 05:43:22.302: INFO: Waiting for pod var-expansion-66d6b557-4c31-4129-af24-0d983da1e3a8 to disappear
Aug 24 05:43:22.313: INFO: Pod var-expansion-66d6b557-4c31-4129-af24-0d983da1e3a8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:43:22.314: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8729" for this suite.
Aug 24 05:43:28.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:43:28.468: INFO: namespace var-expansion-8729 deletion completed in 6.146436505s

• [SLOW TEST:10.383 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:43:28.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-9536, will wait for the garbage collector to delete the pods
Aug 24 05:43:34.645: INFO: Deleting Job.batch foo took: 7.273826ms
Aug 24 05:43:34.946: INFO: Terminating Job.batch foo pods took: 300.8775ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:44:13.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9536" for this suite.
Aug 24 05:44:19.702: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:44:19.844: INFO: namespace job-9536 deletion completed in 6.180572617s

• [SLOW TEST:51.374 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:44:19.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-2460
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 24 05:44:19.924: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 24 05:44:50.152: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.73:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2460 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 05:44:50.152: INFO: >>> kubeConfig: /root/.kube/config
I0824 05:44:50.262328       7 log.go:172] (0x85d8850) (0x85d8930) Create stream
I0824 05:44:50.262547       7 log.go:172] (0x85d8850) (0x85d8930) Stream added, broadcasting: 1
I0824 05:44:50.267700       7 log.go:172] (0x85d8850) Reply frame received for 1
I0824 05:44:50.267943       7 log.go:172] (0x85d8850) (0x85d8a10) Create stream
I0824 05:44:50.268076       7 log.go:172] (0x85d8850) (0x85d8a10) Stream added, broadcasting: 3
I0824 05:44:50.270293       7 log.go:172] (0x85d8850) Reply frame received for 3
I0824 05:44:50.270422       7 log.go:172] (0x85d8850) (0x8e14000) Create stream
I0824 05:44:50.270490       7 log.go:172] (0x85d8850) (0x8e14000) Stream added, broadcasting: 5
I0824 05:44:50.272023       7 log.go:172] (0x85d8850) Reply frame received for 5
I0824 05:44:50.370464       7 log.go:172] (0x85d8850) Data frame received for 3
I0824 05:44:50.370699       7 log.go:172] (0x85d8a10) (3) Data frame handling
I0824 05:44:50.370890       7 log.go:172] (0x85d8850) Data frame received for 5
I0824 05:44:50.371077       7 log.go:172] (0x8e14000) (5) Data frame handling
I0824 05:44:50.371177       7 log.go:172] (0x85d8a10) (3) Data frame sent
I0824 05:44:50.371291       7 log.go:172] (0x85d8850) Data frame received for 3
I0824 05:44:50.371384       7 log.go:172] (0x85d8a10) (3) Data frame handling
I0824 05:44:50.372440       7 log.go:172] (0x85d8850) Data frame received for 1
I0824 05:44:50.372602       7 log.go:172] (0x85d8930) (1) Data frame handling
I0824 05:44:50.372910       7 log.go:172] (0x85d8930) (1) Data frame sent
I0824 05:44:50.373041       7 log.go:172] (0x85d8850) (0x85d8930) Stream removed, broadcasting: 1
I0824 05:44:50.373212       7 log.go:172] (0x85d8850) Go away received
I0824 05:44:50.373528       7 log.go:172] (0x85d8850) (0x85d8930) Stream removed, broadcasting: 1
I0824 05:44:50.373653       7 log.go:172] (0x85d8850) (0x85d8a10) Stream removed, broadcasting: 3
I0824 05:44:50.373756       7 log.go:172] (0x85d8850) (0x8e14000) Stream removed, broadcasting: 5
Aug 24 05:44:50.373: INFO: Found all expected endpoints: [netserver-0]
Aug 24 05:44:50.379: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.32:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2460 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 24 05:44:50.379: INFO: >>> kubeConfig: /root/.kube/config
I0824 05:44:50.484550       7 log.go:172] (0x85d8e70) (0x85d8ee0) Create stream
I0824 05:44:50.484712       7 log.go:172] (0x85d8e70) (0x85d8ee0) Stream added, broadcasting: 1
I0824 05:44:50.489687       7 log.go:172] (0x85d8e70) Reply frame received for 1
I0824 05:44:50.489938       7 log.go:172] (0x85d8e70) (0x85d8f50) Create stream
I0824 05:44:50.490051       7 log.go:172] (0x85d8e70) (0x85d8f50) Stream added, broadcasting: 3
I0824 05:44:50.492948       7 log.go:172] (0x85d8e70) Reply frame received for 3
I0824 05:44:50.493156       7 log.go:172] (0x85d8e70) (0x85d8fc0) Create stream
I0824 05:44:50.493246       7 log.go:172] (0x85d8e70) (0x85d8fc0) Stream added, broadcasting: 5
I0824 05:44:50.494612       7 log.go:172] (0x85d8e70) Reply frame received for 5
I0824 05:44:50.561505       7 log.go:172] (0x85d8e70) Data frame received for 3
I0824 05:44:50.561764       7 log.go:172] (0x85d8f50) (3) Data frame handling
I0824 05:44:50.561969       7 log.go:172] (0x85d8e70) Data frame received for 5
I0824 05:44:50.562262       7 log.go:172] (0x85d8f50) (3) Data frame sent
I0824 05:44:50.562386       7 log.go:172] (0x85d8e70) Data frame received for 3
I0824 05:44:50.562567       7 log.go:172] (0x85d8fc0) (5) Data frame handling
I0824 05:44:50.562734       7 log.go:172] (0x85d8e70) Data frame received for 1
I0824 05:44:50.562857       7 log.go:172] (0x85d8ee0) (1) Data frame handling
I0824 05:44:50.562972       7 log.go:172] (0x85d8ee0) (1) Data frame sent
I0824 05:44:50.563142       7 log.go:172] (0x85d8e70) (0x85d8ee0) Stream removed, broadcasting: 1
I0824 05:44:50.563311       7 log.go:172] (0x85d8f50) (3) Data frame handling
I0824 05:44:50.563472       7 log.go:172] (0x85d8e70) Go away received
I0824 05:44:50.563683       7 log.go:172] (0x85d8e70) (0x85d8ee0) Stream removed, broadcasting: 1
I0824 05:44:50.563822       7 log.go:172] (0x85d8e70) (0x85d8f50) Stream removed, broadcasting: 3
I0824 05:44:50.563952       7 log.go:172] (0x85d8e70) (0x85d8fc0) Stream removed, broadcasting: 5
Aug 24 05:44:50.564: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:44:50.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-2460" for this suite.
Aug 24 05:45:14.596: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:45:14.737: INFO: namespace pod-network-test-2460 deletion completed in 24.161931168s

• [SLOW TEST:54.889 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:45:14.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 24 05:45:14.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3296'
Aug 24 05:45:20.094: INFO: stderr: ""
Aug 24 05:45:20.094: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 24 05:45:20.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3296'
Aug 24 05:45:21.239: INFO: stderr: ""
Aug 24 05:45:21.239: INFO: stdout: "update-demo-nautilus-6dw6v update-demo-nautilus-92wtz "
Aug 24 05:45:21.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6dw6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296'
Aug 24 05:45:22.514: INFO: stderr: ""
Aug 24 05:45:22.515: INFO: stdout: ""
Aug 24 05:45:22.515: INFO: update-demo-nautilus-6dw6v is created but not running
Aug 24 05:45:27.516: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3296'
Aug 24 05:45:28.695: INFO: stderr: ""
Aug 24 05:45:28.695: INFO: stdout: "update-demo-nautilus-6dw6v update-demo-nautilus-92wtz "
Aug 24 05:45:28.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6dw6v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296'
Aug 24 05:45:29.858: INFO: stderr: ""
Aug 24 05:45:29.858: INFO: stdout: "true"
Aug 24 05:45:29.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6dw6v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3296'
Aug 24 05:45:31.036: INFO: stderr: ""
Aug 24 05:45:31.036: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:45:31.036: INFO: validating pod update-demo-nautilus-6dw6v
Aug 24 05:45:31.042: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:45:31.042: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:45:31.042: INFO: update-demo-nautilus-6dw6v is verified up and running
Aug 24 05:45:31.042: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92wtz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3296'
Aug 24 05:45:32.216: INFO: stderr: ""
Aug 24 05:45:32.217: INFO: stdout: "true"
Aug 24 05:45:32.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-92wtz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3296'
Aug 24 05:45:33.403: INFO: stderr: ""
Aug 24 05:45:33.403: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 24 05:45:33.403: INFO: validating pod update-demo-nautilus-92wtz
Aug 24 05:45:33.411: INFO: got data: {
  "image": "nautilus.jpg"
}

Aug 24 05:45:33.411: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 24 05:45:33.411: INFO: update-demo-nautilus-92wtz is verified up and running
STEP: using delete to clean up resources
Aug 24 05:45:33.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3296'
Aug 24 05:45:34.532: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 24 05:45:34.533: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 24 05:45:34.533: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3296'
Aug 24 05:45:35.715: INFO: stderr: "No resources found.\n"
Aug 24 05:45:35.716: INFO: stdout: ""
Aug 24 05:45:35.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3296 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 24 05:45:36.877: INFO: stderr: ""
Aug 24 05:45:36.877: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:45:36.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3296" for this suite.
Aug 24 05:45:58.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:45:59.057: INFO: namespace kubectl-3296 deletion completed in 22.169303901s

• [SLOW TEST:44.317 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
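The Update Demo test above polls each pod with a kubectl Go template that prints "true" only once the `update-demo` container reports a `running` state (hence the intermediate "is created but not running" line). The predicate that template encodes can be sketched in Python; `container_running` is a hypothetical helper for illustration, not part of the e2e framework:

```python
def container_running(pod: dict, container_name: str) -> bool:
    """Mirror the kubectl template check from the log: true only when the
    named container appears in status.containerStatuses with a 'running' state."""
    for status in pod.get("status", {}).get("containerStatuses", []):
        if status.get("name") == container_name and "running" in status.get("state", {}):
            return True
    return False

# A pod with no containerStatuses yet corresponds to the log's
# "update-demo-nautilus-6dw6v is created but not running".
pending = {"status": {}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo",
     "state": {"running": {"startedAt": "2020-08-24T05:45:28Z"}}}]}}
print(container_running(pending, "update-demo"))  # False
print(container_running(running, "update-demo"))  # True
```

The test loops this check every five seconds per pod, which is why two polls appear in the log before both replicas are "verified up and running".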
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:45:59.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Aug 24 05:45:59.131: INFO: namespace kubectl-6647
Aug 24 05:45:59.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6647'
Aug 24 05:46:00.723: INFO: stderr: ""
Aug 24 05:46:00.724: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 24 05:46:01.732: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:46:01.732: INFO: Found 0 / 1
Aug 24 05:46:02.873: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:46:02.873: INFO: Found 0 / 1
Aug 24 05:46:03.732: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:46:03.732: INFO: Found 0 / 1
Aug 24 05:46:04.731: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:46:04.731: INFO: Found 1 / 1
Aug 24 05:46:04.731: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 24 05:46:04.737: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:46:04.737: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 24 05:46:04.737: INFO: wait on redis-master startup in kubectl-6647 
Aug 24 05:46:04.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-b722g redis-master --namespace=kubectl-6647'
Aug 24 05:46:05.883: INFO: stderr: ""
Aug 24 05:46:05.883: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 24 Aug 05:46:03.798 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 24 Aug 05:46:03.799 # Server started, Redis version 3.2.12\n1:M 24 Aug 05:46:03.799 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 24 Aug 05:46:03.799 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Aug 24 05:46:05.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6647'
Aug 24 05:46:07.112: INFO: stderr: ""
Aug 24 05:46:07.112: INFO: stdout: "service/rm2 exposed\n"
Aug 24 05:46:07.165: INFO: Service rm2 in namespace kubectl-6647 found.
STEP: exposing service
Aug 24 05:46:09.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6647'
Aug 24 05:46:10.399: INFO: stderr: ""
Aug 24 05:46:10.400: INFO: stdout: "service/rm3 exposed\n"
Aug 24 05:46:10.409: INFO: Service rm3 in namespace kubectl-6647 found.
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:46:12.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6647" for this suite.
Aug 24 05:46:34.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:46:34.584: INFO: namespace kubectl-6647 deletion completed in 22.150300231s

• [SLOW TEST:35.523 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
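The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` step in the log generates a Service that reuses the replication controller's pod selector (`app: redis`, per the "Selector matched 1 pods for map[app:redis]" lines). A sketch of the roughly equivalent manifest, with field values taken from the command; the exact object kubectl generates may differ in defaulted fields:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-6647
spec:
  selector:
    app: redis          # copied from the replication controller's selector
  ports:
    - protocol: TCP
      port: 1234        # --port: the port the Service exposes
      targetPort: 6379  # --target-port: the redis container's port
```

The follow-up `expose service rm2 --name=rm3` works the same way, chaining off rm2's selector rather than the RC's.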
SS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:46:34.585: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 24 05:46:34.689: INFO: Waiting up to 5m0s for pod "downward-api-098bab09-e943-4eb6-892e-e0da54b2a11e" in namespace "downward-api-6239" to be "success or failure"
Aug 24 05:46:34.700: INFO: Pod "downward-api-098bab09-e943-4eb6-892e-e0da54b2a11e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.983903ms
Aug 24 05:46:36.731: INFO: Pod "downward-api-098bab09-e943-4eb6-892e-e0da54b2a11e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041439614s
Aug 24 05:46:38.739: INFO: Pod "downward-api-098bab09-e943-4eb6-892e-e0da54b2a11e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0494536s
STEP: Saw pod success
Aug 24 05:46:38.739: INFO: Pod "downward-api-098bab09-e943-4eb6-892e-e0da54b2a11e" satisfied condition "success or failure"
Aug 24 05:46:38.746: INFO: Trying to get logs from node iruya-worker pod downward-api-098bab09-e943-4eb6-892e-e0da54b2a11e container dapi-container: 
STEP: delete the pod
Aug 24 05:46:38.783: INFO: Waiting for pod downward-api-098bab09-e943-4eb6-892e-e0da54b2a11e to disappear
Aug 24 05:46:38.812: INFO: Pod downward-api-098bab09-e943-4eb6-892e-e0da54b2a11e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:46:38.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6239" for this suite.
Aug 24 05:46:44.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:46:44.995: INFO: namespace downward-api-6239 deletion completed in 6.172307111s

• [SLOW TEST:10.410 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
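The Downward API test injects the container's own resource limits and requests as environment variables via the `resourceFieldRef` env source. A minimal manifest sketch: the container name `dapi-container` appears in the log, but the env var names and resource quantities here are illustrative, not the test's fixture:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo
spec:
  restartPolicy: Never
  containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "env"]
      resources:
        requests: {cpu: 250m, memory: 32Mi}
        limits: {cpu: 500m, memory: 64Mi}
      env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu      # exposed to the container as $CPU_LIMIT
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.memory # exposed as $MEMORY_REQUEST
```

The pod runs to `Succeeded` (as in the log's phase transitions) and the framework then validates the printed environment from the container logs.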
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:46:44.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 24 05:46:45.133: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8252,SelfLink:/api/v1/namespaces/watch-8252/configmaps/e2e-watch-test-resource-version,UID:e1464750-4487-485f-bd47-79b36d5e8f5d,ResourceVersion:2302596,Generation:0,CreationTimestamp:2020-08-24 05:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 24 05:46:45.135: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-8252,SelfLink:/api/v1/namespaces/watch-8252/configmaps/e2e-watch-test-resource-version,UID:e1464750-4487-485f-bd47-79b36d5e8f5d,ResourceVersion:2302597,Generation:0,CreationTimestamp:2020-08-24 05:46:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:46:45.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8252" for this suite.
Aug 24 05:46:51.159: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:46:51.301: INFO: namespace watch-8252 deletion completed in 6.156432864s

• [SLOW TEST:6.305 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
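"Watching from a specific resource version" means the API server replays only changes newer than the supplied version: the log's watch, started at the resourceVersion returned by the first update, observes exactly the second MODIFIED event (2302596) and the DELETED event (2302597). A toy Python sketch of that filtering; real resourceVersions are opaque strings, so treating them as ordered integers here is a deliberate simplification, and the first-update version (2302595) is inferred, not shown in the log:

```python
def replay_from(events, resource_version: int):
    """Return the events a watch started at `resource_version` would observe:
    only changes that happened strictly after that version, in order."""
    return [e for e in events if e["resourceVersion"] > resource_version]

events = [
    {"type": "ADDED",    "resourceVersion": 2302594},
    {"type": "MODIFIED", "resourceVersion": 2302595},  # first update (watch start point)
    {"type": "MODIFIED", "resourceVersion": 2302596},  # second update (seen in log)
    {"type": "DELETED",  "resourceVersion": 2302597},  # deletion (seen in log)
]
observed = replay_from(events, 2302595)
print([e["type"] for e in observed])  # ['MODIFIED', 'DELETED']
```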
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:46:51.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-8377
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8377 to expose endpoints map[]
Aug 24 05:46:51.460: INFO: successfully validated that service endpoint-test2 in namespace services-8377 exposes endpoints map[] (12.977397ms elapsed)
STEP: Creating pod pod1 in namespace services-8377
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8377 to expose endpoints map[pod1:[80]]
Aug 24 05:46:55.578: INFO: successfully validated that service endpoint-test2 in namespace services-8377 exposes endpoints map[pod1:[80]] (4.109496269s elapsed)
STEP: Creating pod pod2 in namespace services-8377
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8377 to expose endpoints map[pod1:[80] pod2:[80]]
Aug 24 05:46:59.710: INFO: successfully validated that service endpoint-test2 in namespace services-8377 exposes endpoints map[pod1:[80] pod2:[80]] (4.123852514s elapsed)
STEP: Deleting pod pod1 in namespace services-8377
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8377 to expose endpoints map[pod2:[80]]
Aug 24 05:46:59.763: INFO: successfully validated that service endpoint-test2 in namespace services-8377 exposes endpoints map[pod2:[80]] (45.45527ms elapsed)
STEP: Deleting pod pod2 in namespace services-8377
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8377 to expose endpoints map[]
Aug 24 05:46:59.849: INFO: successfully validated that service endpoint-test2 in namespace services-8377 exposes endpoints map[] (79.665991ms elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:47:00.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8377" for this suite.
Aug 24 05:47:22.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:47:22.557: INFO: namespace services-8377 deletion completed in 22.27318145s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:31.255 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:47:22.558: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Aug 24 05:47:22.642: INFO: Waiting up to 5m0s for pod "client-containers-a48ab273-cf4a-4c44-a258-1ea0b95a738c" in namespace "containers-8424" to be "success or failure"
Aug 24 05:47:22.700: INFO: Pod "client-containers-a48ab273-cf4a-4c44-a258-1ea0b95a738c": Phase="Pending", Reason="", readiness=false. Elapsed: 58.191641ms
Aug 24 05:47:24.706: INFO: Pod "client-containers-a48ab273-cf4a-4c44-a258-1ea0b95a738c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064361997s
Aug 24 05:47:26.719: INFO: Pod "client-containers-a48ab273-cf4a-4c44-a258-1ea0b95a738c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.077348706s
STEP: Saw pod success
Aug 24 05:47:26.719: INFO: Pod "client-containers-a48ab273-cf4a-4c44-a258-1ea0b95a738c" satisfied condition "success or failure"
Aug 24 05:47:26.725: INFO: Trying to get logs from node iruya-worker pod client-containers-a48ab273-cf4a-4c44-a258-1ea0b95a738c container test-container: 
STEP: delete the pod
Aug 24 05:47:26.756: INFO: Waiting for pod client-containers-a48ab273-cf4a-4c44-a258-1ea0b95a738c to disappear
Aug 24 05:47:26.784: INFO: Pod client-containers-a48ab273-cf4a-4c44-a258-1ea0b95a738c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:47:26.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8424" for this suite.
Aug 24 05:47:32.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:47:33.007: INFO: namespace containers-8424 deletion completed in 6.193700831s

• [SLOW TEST:10.449 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
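Overriding "the image's default arguments (docker cmd)" maps onto the pod spec's `args` field: in Kubernetes, `command` replaces the image's ENTRYPOINT and `args` replaces its CMD. An illustrative sketch; the image and argument values are hypothetical, not the conformance test's actual fixture:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
    - name: test-container
      image: busybox
      # No `command` given: the image ENTRYPOINT (if any) is kept,
      # while `args` overrides the image's default CMD.
      args: ["echo", "overridden arguments"]
```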
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:47:33.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8318.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8318.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 24 05:47:39.242: INFO: DNS probes using dns-8318/dns-test-5232b2cd-5426-47e6-a200-088e7d640a25 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:47:39.270: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8318" for this suite.
Aug 24 05:47:45.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:47:45.488: INFO: namespace dns-8318 deletion completed in 6.19465721s

• [SLOW TEST:12.477 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:47:45.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-11d13021-45a6-471a-8248-552c963cf75b
STEP: Creating a pod to test consume configMaps
Aug 24 05:47:45.594: INFO: Waiting up to 5m0s for pod "pod-configmaps-f8350071-4816-44db-a8d1-ef3f6bde4bf4" in namespace "configmap-1781" to be "success or failure"
Aug 24 05:47:45.610: INFO: Pod "pod-configmaps-f8350071-4816-44db-a8d1-ef3f6bde4bf4": Phase="Pending", Reason="", readiness=false. Elapsed: 15.830428ms
Aug 24 05:47:47.618: INFO: Pod "pod-configmaps-f8350071-4816-44db-a8d1-ef3f6bde4bf4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023369601s
Aug 24 05:47:49.624: INFO: Pod "pod-configmaps-f8350071-4816-44db-a8d1-ef3f6bde4bf4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029892357s
STEP: Saw pod success
Aug 24 05:47:49.625: INFO: Pod "pod-configmaps-f8350071-4816-44db-a8d1-ef3f6bde4bf4" satisfied condition "success or failure"
Aug 24 05:47:49.628: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f8350071-4816-44db-a8d1-ef3f6bde4bf4 container configmap-volume-test: 
STEP: delete the pod
Aug 24 05:47:49.673: INFO: Waiting for pod pod-configmaps-f8350071-4816-44db-a8d1-ef3f6bde4bf4 to disappear
Aug 24 05:47:49.687: INFO: Pod pod-configmaps-f8350071-4816-44db-a8d1-ef3f6bde4bf4 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:47:49.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1781" for this suite.
Aug 24 05:47:55.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:47:55.866: INFO: namespace configmap-1781 deletion completed in 6.166686504s

• [SLOW TEST:10.371 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:47:55.871: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Aug 24 05:48:02.103: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-837360b0-9850-4b14-ad49-84a739be5609 -c busybox-main-container --namespace=emptydir-3737 -- cat /usr/share/volumeshare/shareddata.txt'
Aug 24 05:48:03.523: INFO: stderr: "I0824 05:48:03.375499    4224 log.go:172] (0x29058f0) (0x2905960) Create stream\nI0824 05:48:03.377478    4224 log.go:172] (0x29058f0) (0x2905960) Stream added, broadcasting: 1\nI0824 05:48:03.392364    4224 log.go:172] (0x29058f0) Reply frame received for 1\nI0824 05:48:03.392841    4224 log.go:172] (0x29058f0) (0x2b80000) Create stream\nI0824 05:48:03.392904    4224 log.go:172] (0x29058f0) (0x2b80000) Stream added, broadcasting: 3\nI0824 05:48:03.393959    4224 log.go:172] (0x29058f0) Reply frame received for 3\nI0824 05:48:03.394181    4224 log.go:172] (0x29058f0) (0x24ac9a0) Create stream\nI0824 05:48:03.394250    4224 log.go:172] (0x29058f0) (0x24ac9a0) Stream added, broadcasting: 5\nI0824 05:48:03.395161    4224 log.go:172] (0x29058f0) Reply frame received for 5\nI0824 05:48:03.498219    4224 log.go:172] (0x29058f0) Data frame received for 3\nI0824 05:48:03.498644    4224 log.go:172] (0x29058f0) Data frame received for 5\nI0824 05:48:03.498827    4224 log.go:172] (0x29058f0) Data frame received for 1\nI0824 05:48:03.498921    4224 log.go:172] (0x2905960) (1) Data frame handling\nI0824 05:48:03.499114    4224 log.go:172] (0x24ac9a0) (5) Data frame handling\nI0824 05:48:03.499553    4224 log.go:172] (0x2b80000) (3) Data frame handling\nI0824 05:48:03.499947    4224 log.go:172] (0x2905960) (1) Data frame sent\nI0824 05:48:03.500308    4224 log.go:172] (0x2b80000) (3) Data frame sent\nI0824 05:48:03.500492    4224 log.go:172] (0x29058f0) Data frame received for 3\nI0824 05:48:03.500644    4224 log.go:172] (0x2b80000) (3) Data frame handling\nI0824 05:48:03.502109    4224 log.go:172] (0x29058f0) (0x2905960) Stream removed, broadcasting: 1\nI0824 05:48:03.504224    4224 log.go:172] (0x29058f0) Go away received\nI0824 05:48:03.508661    4224 log.go:172] (0x29058f0) (0x2905960) Stream removed, broadcasting: 1\nI0824 05:48:03.509030    4224 log.go:172] (0x29058f0) (0x2b80000) Stream removed, broadcasting: 3\nI0824 05:48:03.509260    4224 log.go:172] (0x29058f0) (0x24ac9a0) Stream removed, broadcasting: 5\n"
Aug 24 05:48:03.524: INFO: stdout: "Hello from the busy-box sub-container\n"
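What this test exercises: one emptyDir volume mounted into two containers of the same pod, where a busybox sub-container writes `shareddata.txt` and the main container reads it back via `kubectl exec`. The shared-directory idea can be sketched locally with a temporary directory standing in for the emptyDir volume (an illustrative analogy only, not the real kubelet mechanism):

```python
import tempfile
import pathlib

# A shared temp directory plays the role of the pod's emptyDir volume:
# both "containers" see the same backing directory.
with tempfile.TemporaryDirectory() as volume:
    shared = pathlib.Path(volume) / "shareddata.txt"
    # Writer side: the busybox sub-container drops a file into the volume.
    shared.write_text("Hello from the busy-box sub-container\n")
    # Reader side: the main container cats the same file, as the
    # `kubectl exec ... -- cat` above does.
    content = shared.read_text()
```

The test passes when the text read in the main container matches what the sub-container wrote, which is exactly the stdout line logged above.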
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:48:03.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3737" for this suite.
Aug 24 05:48:09.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:48:09.683: INFO: namespace emptydir-3737 deletion completed in 6.147198404s

• [SLOW TEST:13.812 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:48:09.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:48:13.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9212" for this suite.
Aug 24 05:48:53.845: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:48:53.995: INFO: namespace kubelet-test-9212 deletion completed in 40.171037793s

• [SLOW TEST:44.310 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:48:53.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:48:54.116: INFO: Pod name rollover-pod: Found 0 pods out of 1
Aug 24 05:48:59.123: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Aug 24 05:48:59.124: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Aug 24 05:49:01.133: INFO: Creating deployment "test-rollover-deployment"
Aug 24 05:49:01.151: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Aug 24 05:49:03.165: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Aug 24 05:49:03.180: INFO: Ensure that both replica sets have 1 created replica
Aug 24 05:49:03.190: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Aug 24 05:49:03.200: INFO: Updating deployment test-rollover-deployment
Aug 24 05:49:03.200: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Aug 24 05:49:05.227: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Aug 24 05:49:05.241: INFO: Make sure deployment "test-rollover-deployment" is complete
Aug 24 05:49:05.252: INFO: all replica sets need to contain the pod-template-hash label
Aug 24 05:49:05.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844943, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:49:07.268: INFO: all replica sets need to contain the pod-template-hash label
Aug 24 05:49:07.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844947, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:49:09.268: INFO: all replica sets need to contain the pod-template-hash label
Aug 24 05:49:09.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844947, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:49:11.269: INFO: all replica sets need to contain the pod-template-hash label
Aug 24 05:49:11.270: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844947, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:49:13.268: INFO: all replica sets need to contain the pod-template-hash label
Aug 24 05:49:13.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844947, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:49:15.268: INFO: all replica sets need to contain the pod-template-hash label
Aug 24 05:49:15.269: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844947, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:49:17.342: INFO: 
Aug 24 05:49:17.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844957, loc:(*time.Location)(0x67985e0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733844941, loc:(*time.Location)(0x67985e0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 24 05:49:19.264: INFO: 
Aug 24 05:49:19.265: INFO: Ensure that both old replica sets have no replicas
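The repeated DeploymentStatus dumps above show the test polling until the rollover completes: mid-rollover it sees Replicas:2 with UpdatedReplicas:1 and UnavailableReplicas:1, and it stops once everything is updated and available and the old replica sets are scaled to zero. A minimal sketch of that completeness check, judging only by the fields visible in the dumps (an assumption-laden simplification, not the controller's exact code):

```python
def deployment_complete(spec_replicas, status):
    """Return True when a rollout looks finished, using only the
    DeploymentStatus fields shown in the log dumps above. A sketch of
    the condition the e2e test waits for, not the real controller code."""
    return (
        status["UpdatedReplicas"] == spec_replicas
        and status["AvailableReplicas"] == spec_replicas
        and status["Replicas"] == spec_replicas
        and status["UnavailableReplicas"] == 0
    )

# Mid-rollover (05:49:05 dump): two replicas exist, only one updated.
mid = {"Replicas": 2, "UpdatedReplicas": 1,
       "AvailableReplicas": 1, "UnavailableReplicas": 1}
# After rollover (final status): old replica sets scaled to zero.
done = {"Replicas": 1, "UpdatedReplicas": 1,
        "AvailableReplicas": 1, "UnavailableReplicas": 0}
```

With the deployment's spec of 1 replica, the check is false for the mid-rollover snapshot and true for the final one, matching the point where the log stops repeating the status dump.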
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 24 05:49:19.277: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-2226,SelfLink:/apis/apps/v1/namespaces/deployment-2226/deployments/test-rollover-deployment,UID:0165bded-f2bb-446f-929c-e4d73ea58659,ResourceVersion:2303173,Generation:2,CreationTimestamp:2020-08-24 05:49:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-24 05:49:01 +0000 UTC 2020-08-24 05:49:01 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-24 05:49:17 +0000 UTC 2020-08-24 05:49:01 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Aug 24 05:49:19.284: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-2226,SelfLink:/apis/apps/v1/namespaces/deployment-2226/replicasets/test-rollover-deployment-854595fc44,UID:6eba1336-8d10-49c6-8e3a-576d860524a0,ResourceVersion:2303162,Generation:2,CreationTimestamp:2020-08-24 05:49:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0165bded-f2bb-446f-929c-e4d73ea58659 0x8f13ad7 0x8f13ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Aug 24 05:49:19.284: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Aug 24 05:49:19.285: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-2226,SelfLink:/apis/apps/v1/namespaces/deployment-2226/replicasets/test-rollover-controller,UID:e52ccb43-7f11-4933-9ee9-2c9d8a7fe2f5,ResourceVersion:2303172,Generation:2,CreationTimestamp:2020-08-24 05:48:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0165bded-f2bb-446f-929c-e4d73ea58659 0x8f13827 0x8f13828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 24 05:49:19.286: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-2226,SelfLink:/apis/apps/v1/namespaces/deployment-2226/replicasets/test-rollover-deployment-9b8b997cf,UID:b04f8885-c087-4139-b32a-20149e2affa0,ResourceVersion:2303125,Generation:2,CreationTimestamp:2020-08-24 05:49:01 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 0165bded-f2bb-446f-929c-e4d73ea58659 0x8f13ea0 0x8f13ea1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Aug 24 05:49:19.294: INFO: Pod "test-rollover-deployment-854595fc44-c2znk" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-c2znk,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-2226,SelfLink:/api/v1/namespaces/deployment-2226/pods/test-rollover-deployment-854595fc44-c2znk,UID:3b0dfd7f-b8bd-4e16-a5a3-8aa18dae6271,ResourceVersion:2303140,Generation:0,CreationTimestamp:2020-08-24 05:49:03 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 6eba1336-8d10-49c6-8e3a-576d860524a0 0x85f25b7 0x85f25b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s6hns {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s6hns,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-s6hns true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0x85f2630} {node.kubernetes.io/unreachable Exists  NoExecute 0x85f2650}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:49:03 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:49:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:49:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-24 05:49:03 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.82,StartTime:2020-08-24 05:49:03 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-24 05:49:06 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://24dc0b591c81579cb1aa405ee164eb6c03d54550cd0ad1dde6eca01406cdad56}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:49:19.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2226" for this suite.
Aug 24 05:49:25.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:49:25.453: INFO: namespace deployment-2226 deletion completed in 6.150480885s

• [SLOW TEST:31.457 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:49:25.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 24 05:49:25.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4512'
Aug 24 05:49:27.126: INFO: stderr: ""
Aug 24 05:49:27.126: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 24 05:49:27.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4512'
Aug 24 05:49:28.761: INFO: stderr: ""
Aug 24 05:49:28.761: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 24 05:49:29.769: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:49:29.769: INFO: Found 0 / 1
Aug 24 05:49:30.767: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:49:30.768: INFO: Found 0 / 1
Aug 24 05:49:31.769: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:49:31.769: INFO: Found 1 / 1
Aug 24 05:49:31.769: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Aug 24 05:49:31.775: INFO: Selector matched 1 pods for map[app:redis]
Aug 24 05:49:31.775: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Aug 24 05:49:31.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-hdjvw --namespace=kubectl-4512'
Aug 24 05:49:33.014: INFO: stderr: ""
Aug 24 05:49:33.014: INFO: stdout: "Name:           redis-master-hdjvw\nNamespace:      kubectl-4512\nPriority:       0\nNode:           iruya-worker2/172.18.0.5\nStart Time:     Mon, 24 Aug 2020 05:49:27 +0000\nLabels:         app=redis\n                role=master\nAnnotations:    \nStatus:         Running\nIP:             10.244.2.39\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   containerd://83798237dc466b9b49ea2b5d771f2731b60e1246e8df1b0fbee3f8b67b394fa1\n    Image:          gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Image ID:       gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Mon, 24 Aug 2020 05:49:30 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kx947 (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-kx947:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-kx947\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age   From                    Message\n  ----    ------     ----  ----                    -------\n  Normal  Scheduled  6s    default-scheduler       Successfully assigned kubectl-4512/redis-master-hdjvw to iruya-worker2\n  Normal  Pulled     5s    kubelet, iruya-worker2  Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n  Normal  Created    3s    kubelet, iruya-worker2  Created container redis-master\n  Normal  Started    3s    kubelet, iruya-worker2  Started container redis-master\n"
Aug 24 05:49:33.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-4512'
Aug 24 05:49:34.303: INFO: stderr: ""
Aug 24 05:49:34.303: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-4512\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/redis:1.0\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  7s    replication-controller  Created pod: redis-master-hdjvw\n"
Aug 24 05:49:34.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-4512'
Aug 24 05:49:35.478: INFO: stderr: ""
Aug 24 05:49:35.478: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-4512\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.107.216.138\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.244.2.39:6379\nSession Affinity:  None\nEvents:            \n"
Aug 24 05:49:35.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane'
Aug 24 05:49:36.772: INFO: stderr: ""
Aug 24 05:49:36.772: INFO: stdout: "Name:               iruya-control-plane\nRoles:              master\nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=iruya-control-plane\n                    kubernetes.io/os=linux\n                    node-role.kubernetes.io/master=\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 15 Aug 2020 09:34:51 +0000\nTaints:             node-role.kubernetes.io/master:NoSchedule\nUnschedulable:      false\nConditions:\n  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----             ------  -----------------                 ------------------                ------                       -------\n  MemoryPressure   False   Mon, 24 Aug 2020 05:48:41 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure     False   Mon, 24 Aug 2020 05:48:41 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure      False   Mon, 24 Aug 2020 05:48:41 +0000   Sat, 15 Aug 2020 09:34:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready            True    Mon, 24 Aug 2020 05:48:41 +0000   Sat, 15 Aug 2020 09:35:31 +0000   KubeletReady                 kubelet is posting ready status\nAddresses:\n  InternalIP:  172.18.0.7\n  Hostname:    iruya-control-plane\nCapacity:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nAllocatable:\n cpu:                16\n ephemeral-storage:  2303189964Ki\n hugepages-1Gi:      0\n hugepages-2Mi:      0\n memory:             131759872Ki\n pods:               110\nSystem Info:\n Machine ID:                 3ed9130db08840259d2231bd97220883\n System UUID:                e52cc602-b019-45cd-b06f-235cc5705532\n Boot ID:                    11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version:             4.15.0-109-generic\n OS Image:                   Ubuntu 20.04 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version:            v1.15.12\n Kube-Proxy Version:         v1.15.12\nPodCIDR:                     10.244.0.0/24\nNon-terminated Pods:         (9 in total)\n  Namespace                  Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                                           ------------  ----------  ---------------  -------------  ---\n  kube-system                coredns-5d4dd4b4db-6krdd                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     8d\n  kube-system                coredns-5d4dd4b4db-htp88                       100m (0%)     0 (0%)      70Mi (0%)        170Mi (0%)     8d\n  kube-system                etcd-iruya-control-plane                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         8d\n  kube-system                kindnet-gvnsh                                  100m (0%)     100m (0%)   50Mi (0%)        50Mi (0%)      8d\n  kube-system                kube-apiserver-iruya-control-plane             250m (1%)     0 (0%)      0 (0%)           0 (0%)         8d\n  kube-system                kube-controller-manager-iruya-control-plane    200m (1%)     0 (0%)      0 (0%)           0 (0%)         8d\n  kube-system                kube-proxy-ndl9h                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         8d\n  kube-system                kube-scheduler-iruya-control-plane             100m (0%)     0 (0%)      0 (0%)           0 (0%)         8d\n  local-path-storage         local-path-provisioner-668779bd7-g227z         0 (0%)        0 (0%)      0 (0%)           0 (0%)         8d\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests    Limits\n  --------           --------    ------\n  cpu                850m (5%)   100m (0%)\n  memory             190Mi (0%)  390Mi (0%)\n  ephemeral-storage  0 (0%)      0 (0%)\nEvents:              \n"
Aug 24 05:49:36.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4512'
Aug 24 05:49:37.931: INFO: stderr: ""
Aug 24 05:49:37.931: INFO: stdout: "Name:         kubectl-4512\nLabels:       e2e-framework=kubectl\n              e2e-run=13977ac3-fb95-481e-b5b9-e3a3c05a0f4f\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:49:37.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4512" for this suite.
Aug 24 05:49:59.968: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:50:00.095: INFO: namespace kubectl-4512 deletion completed in 22.153419271s

• [SLOW TEST:34.640 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl describe
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:50:00.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 24 05:50:08.329: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:08.338: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:10.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:10.346: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:12.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:12.346: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:14.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:14.347: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:16.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:16.346: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:18.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:18.346: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:20.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:20.345: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:22.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:22.346: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:24.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:24.346: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:26.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:26.346: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:28.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:28.347: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:30.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:30.347: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:32.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:32.346: INFO: Pod pod-with-prestop-exec-hook still exists
Aug 24 05:50:34.338: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Aug 24 05:50:34.345: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:50:34.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4550" for this suite.
Aug 24 05:50:56.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:50:56.543: INFO: namespace container-lifecycle-hook-4550 deletion completed in 22.179668465s

• [SLOW TEST:56.446 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:50:56.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:50:56.649: INFO: Waiting up to 5m0s for pod "downwardapi-volume-25642964-1ab3-4add-927f-a901717d5647" in namespace "projected-3688" to be "success or failure"
Aug 24 05:50:56.656: INFO: Pod "downwardapi-volume-25642964-1ab3-4add-927f-a901717d5647": Phase="Pending", Reason="", readiness=false. Elapsed: 7.134991ms
Aug 24 05:50:58.704: INFO: Pod "downwardapi-volume-25642964-1ab3-4add-927f-a901717d5647": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05417897s
Aug 24 05:51:00.711: INFO: Pod "downwardapi-volume-25642964-1ab3-4add-927f-a901717d5647": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061529383s
STEP: Saw pod success
Aug 24 05:51:00.711: INFO: Pod "downwardapi-volume-25642964-1ab3-4add-927f-a901717d5647" satisfied condition "success or failure"
Aug 24 05:51:00.717: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-25642964-1ab3-4add-927f-a901717d5647 container client-container: 
STEP: delete the pod
Aug 24 05:51:00.768: INFO: Waiting for pod downwardapi-volume-25642964-1ab3-4add-927f-a901717d5647 to disappear
Aug 24 05:51:00.776: INFO: Pod downwardapi-volume-25642964-1ab3-4add-927f-a901717d5647 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:51:00.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3688" for this suite.
Aug 24 05:51:06.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:51:06.932: INFO: namespace projected-3688 deletion completed in 6.147332357s

• [SLOW TEST:10.387 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:51:06.937: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 24 05:51:07.037: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25" in namespace "projected-7573" to be "success or failure"
Aug 24 05:51:07.045: INFO: Pod "downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.35767ms
Aug 24 05:51:09.154: INFO: Pod "downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116402571s
Aug 24 05:51:11.980: INFO: Pod "downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25": Phase="Running", Reason="", readiness=true. Elapsed: 4.942848448s
Aug 24 05:51:13.989: INFO: Pod "downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.951370201s
STEP: Saw pod success
Aug 24 05:51:13.989: INFO: Pod "downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25" satisfied condition "success or failure"
Aug 24 05:51:13.995: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25 container client-container: 
STEP: delete the pod
Aug 24 05:51:14.017: INFO: Waiting for pod downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25 to disappear
Aug 24 05:51:14.021: INFO: Pod downwardapi-volume-2c01c877-93a7-4fe8-a46d-b4d29c59bc25 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:51:14.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7573" for this suite.
Aug 24 05:51:20.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:51:20.259: INFO: namespace projected-7573 deletion completed in 6.229335579s

• [SLOW TEST:13.323 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:51:20.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Aug 24 05:51:20.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 24 05:51:21.440: INFO: stderr: ""
Aug 24 05:51:21.440: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:51:21.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2851" for this suite.
Aug 24 05:51:27.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:51:27.624: INFO: namespace kubectl-2851 deletion completed in 6.173092427s

• [SLOW TEST:7.362 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:51:27.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:51:31.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8544" for this suite.
Aug 24 05:52:09.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:52:09.915: INFO: namespace kubelet-test-8544 deletion completed in 38.168901717s

• [SLOW TEST:42.290 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:52:09.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 24 05:52:10.024: INFO: Waiting up to 5m0s for pod "pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9" in namespace "emptydir-8582" to be "success or failure"
Aug 24 05:52:10.036: INFO: Pod "pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.744746ms
Aug 24 05:52:12.064: INFO: Pod "pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040038185s
Aug 24 05:52:14.071: INFO: Pod "pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9": Phase="Running", Reason="", readiness=true. Elapsed: 4.046752165s
Aug 24 05:52:16.076: INFO: Pod "pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051828597s
STEP: Saw pod success
Aug 24 05:52:16.076: INFO: Pod "pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9" satisfied condition "success or failure"
Aug 24 05:52:16.081: INFO: Trying to get logs from node iruya-worker2 pod pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9 container test-container: 
STEP: delete the pod
Aug 24 05:52:16.102: INFO: Waiting for pod pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9 to disappear
Aug 24 05:52:16.122: INFO: Pod pod-45c5d124-7d6c-4b5b-a521-771ec4f945a9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:52:16.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8582" for this suite.
Aug 24 05:52:22.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:52:22.446: INFO: namespace emptydir-8582 deletion completed in 6.315278461s

• [SLOW TEST:12.526 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
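The EmptyDir test above ("root,0777,tmpfs") creates a pod whose emptyDir volume is backed by memory and checks the mount's permission bits. A rough equivalent of the pod under test (the e2e suite uses its own mounttest image to report the mode; the image and command here are stand-ins):

```yaml
# Illustrative sketch, not the exact e2e fixture.
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-tmpfs
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Report the volume's mode bits and confirm it is tmpfs-backed.
    command: ["sh", "-c", "ls -ld /test-volume && mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # tmpfs; omit for the node's default medium
```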
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:52:22.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-3f687ff8-64c8-4a83-a059-7c2359b7a7cf
STEP: Creating secret with name s-test-opt-upd-db1fab84-6874-47da-aadf-e72e7d777bf5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3f687ff8-64c8-4a83-a059-7c2359b7a7cf
STEP: Updating secret s-test-opt-upd-db1fab84-6874-47da-aadf-e72e7d777bf5
STEP: Creating secret with name s-test-opt-create-1d1dbf74-e2d6-4fff-8b43-1efc26703fc0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:52:36.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8851" for this suite.
Aug 24 05:53:00.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:53:00.927: INFO: namespace projected-8851 deletion completed in 24.156625407s

• [SLOW TEST:38.480 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:53:00.929: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d62f04fa-c0a2-4ced-ae09-77a1242d10f1
STEP: Creating a pod to test consume secrets
Aug 24 05:53:01.042: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b7b97d81-c9cb-457d-9e58-300d71f302d4" in namespace "projected-8472" to be "success or failure"
Aug 24 05:53:01.055: INFO: Pod "pod-projected-secrets-b7b97d81-c9cb-457d-9e58-300d71f302d4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.905435ms
Aug 24 05:53:03.135: INFO: Pod "pod-projected-secrets-b7b97d81-c9cb-457d-9e58-300d71f302d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093191777s
Aug 24 05:53:05.144: INFO: Pod "pod-projected-secrets-b7b97d81-c9cb-457d-9e58-300d71f302d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101693955s
STEP: Saw pod success
Aug 24 05:53:05.144: INFO: Pod "pod-projected-secrets-b7b97d81-c9cb-457d-9e58-300d71f302d4" satisfied condition "success or failure"
Aug 24 05:53:05.149: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-b7b97d81-c9cb-457d-9e58-300d71f302d4 container projected-secret-volume-test: 
STEP: delete the pod
Aug 24 05:53:05.193: INFO: Waiting for pod pod-projected-secrets-b7b97d81-c9cb-457d-9e58-300d71f302d4 to disappear
Aug 24 05:53:05.280: INFO: Pod pod-projected-secrets-b7b97d81-c9cb-457d-9e58-300d71f302d4 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:53:05.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8472" for this suite.
Aug 24 05:53:11.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:53:11.459: INFO: namespace projected-8472 deletion completed in 6.152064075s

• [SLOW TEST:10.531 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
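The defaultMode test above mounts a projected secret and verifies the file permission bits. A sketch of the volume definition involved (the mode value here is only an example; the test picks a specific mode and asserts the mounted file carries it):

```yaml
# Illustrative sketch, not the exact e2e fixture.
  volumes:
  - name: projected-secret-volume
    projected:
      defaultMode: 0400   # applied to projected files unless a per-item mode overrides it
      sources:
      - secret:
          name: projected-secret-test
```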
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:53:11.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 24 05:53:11.541: INFO: Waiting up to 5m0s for pod "pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1" in namespace "emptydir-6952" to be "success or failure"
Aug 24 05:53:11.546: INFO: Pod "pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.184403ms
Aug 24 05:53:13.794: INFO: Pod "pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.252464845s
Aug 24 05:53:15.799: INFO: Pod "pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.257033614s
Aug 24 05:53:18.069: INFO: Pod "pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.527490724s
STEP: Saw pod success
Aug 24 05:53:18.069: INFO: Pod "pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1" satisfied condition "success or failure"
Aug 24 05:53:18.073: INFO: Trying to get logs from node iruya-worker2 pod pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1 container test-container: 
STEP: delete the pod
Aug 24 05:53:18.579: INFO: Waiting for pod pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1 to disappear
Aug 24 05:53:18.600: INFO: Pod pod-67fbcded-fbb2-4dc3-b1ec-e76311fbc8f1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:53:18.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6952" for this suite.
Aug 24 05:53:24.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:53:25.094: INFO: namespace emptydir-6952 deletion completed in 6.483301483s

• [SLOW TEST:13.634 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:53:25.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Aug 24 05:53:35.259: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 24 05:53:35.266: INFO: Pod pod-with-prestop-http-hook still exists
Aug 24 05:53:37.266: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 24 05:53:37.273: INFO: Pod pod-with-prestop-http-hook still exists
Aug 24 05:53:39.266: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 24 05:53:39.274: INFO: Pod pod-with-prestop-http-hook still exists
Aug 24 05:53:41.266: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Aug 24 05:53:41.273: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:53:41.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2586" for this suite.
Aug 24 05:54:03.324: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:54:03.436: INFO: namespace container-lifecycle-hook-2586 deletion completed in 22.145964313s

• [SLOW TEST:38.341 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
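The lifecycle-hook test above first starts a handler pod to receive HTTP requests, then creates a pod with a `preStop` httpGet hook and deletes it, asserting the handler observed the hook call before the pod disappeared. The shape of the hooked container (host/port/path are illustrative; in the test they point at the handler pod):

```yaml
# Illustrative sketch, not the exact e2e fixture.
  containers:
  - name: pod-with-prestop-http-hook
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop   # hypothetical handler endpoint
          port: 8080
          host: 10.244.0.5          # handler pod IP in the test; illustrative here
```

The repeated "still exists" lines in the log reflect graceful termination: the kubelet runs the preStop hook and waits out the grace period before the pod object is finally removed.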
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:54:03.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7551.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7551.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 24 05:54:13.588: INFO: DNS probes using dns-test-2b5329d0-6bcd-4997-8c58-ed68fe454ad7 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7551.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7551.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 24 05:54:24.059: INFO: File wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local from pod  dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 24 05:54:24.064: INFO: File jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local from pod  dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 24 05:54:24.064: INFO: Lookups using dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 failed for: [wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local]

Aug 24 05:54:29.071: INFO: File wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local from pod  dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 24 05:54:29.076: INFO: File jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local from pod  dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 24 05:54:29.076: INFO: Lookups using dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 failed for: [wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local]

Aug 24 05:54:34.252: INFO: File wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local from pod  dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 contains '' instead of 'bar.example.com.'
Aug 24 05:54:34.258: INFO: File jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local from pod  dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 contains '' instead of 'bar.example.com.'
Aug 24 05:54:34.258: INFO: Lookups using dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 failed for: [wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local]

Aug 24 05:54:39.084: INFO: File wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local from pod  dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 24 05:54:39.089: INFO: File jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local from pod  dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 contains 'foo.example.com.
' instead of 'bar.example.com.'
Aug 24 05:54:39.090: INFO: Lookups using dns-7551/dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 failed for: [wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local]

Aug 24 05:54:44.078: INFO: DNS probes using dns-test-981c76c6-f044-45f2-a113-d15a6b9cf681 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7551.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7551.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7551.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7551.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 24 05:54:52.755: INFO: DNS probes using dns-test-c1334e1f-5f4b-4bd2-9140-b36fd324411a succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:54:52.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7551" for this suite.
Aug 24 05:55:01.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:55:01.133: INFO: namespace dns-7551 deletion completed in 8.262097645s

• [SLOW TEST:57.695 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:55:01.137: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Aug 24 05:55:01.669: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-8922" to be "success or failure"
Aug 24 05:55:01.775: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 106.073264ms
Aug 24 05:55:03.821: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152329865s
Aug 24 05:55:06.019: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.35027117s
Aug 24 05:55:08.025: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.356105939s
Aug 24 05:55:10.031: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.362367126s
STEP: Saw pod success
Aug 24 05:55:10.031: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Aug 24 05:55:10.036: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Aug 24 05:55:10.353: INFO: Waiting for pod pod-host-path-test to disappear
Aug 24 05:55:10.441: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:55:10.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-8922" for this suite.
Aug 24 05:55:18.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:55:18.779: INFO: namespace hostpath-8922 deletion completed in 8.327395438s

• [SLOW TEST:17.642 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 24 05:55:18.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 24 05:55:19.144: INFO: Waiting up to 5m0s for pod "downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e" in namespace "downward-api-4700" to be "success or failure"
Aug 24 05:55:19.366: INFO: Pod "downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 221.821502ms
Aug 24 05:55:21.373: INFO: Pod "downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.22837421s
Aug 24 05:55:23.378: INFO: Pod "downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.234113838s
Aug 24 05:55:25.386: INFO: Pod "downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e": Phase="Running", Reason="", readiness=true. Elapsed: 6.241887812s
Aug 24 05:55:27.393: INFO: Pod "downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.249277542s
STEP: Saw pod success
Aug 24 05:55:27.394: INFO: Pod "downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e" satisfied condition "success or failure"
Aug 24 05:55:27.785: INFO: Trying to get logs from node iruya-worker2 pod downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e container dapi-container: 
STEP: delete the pod
Aug 24 05:55:27.959: INFO: Waiting for pod downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e to disappear
Aug 24 05:55:28.000: INFO: Pod downward-api-126561a4-d524-4dd3-9582-c17140fd1a1e no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 24 05:55:28.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4700" for this suite.
Aug 24 05:55:34.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 24 05:55:34.178: INFO: namespace downward-api-4700 deletion completed in 6.169482148s

• [SLOW TEST:15.397 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
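The Downward API test above checks that when a container declares no resource limits, `resourceFieldRef` env vars fall back to the node's allocatable capacity. The env stanza involved looks roughly like this (variable names are illustrative):

```yaml
# Illustrative sketch, not the exact e2e fixture.
    env:
    - name: CPU_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.cpu      # defaults to node allocatable CPU when no limit is set
    - name: MEMORY_LIMIT
      valueFrom:
        resourceFieldRef:
          resource: limits.memory   # defaults to node allocatable memory when no limit is set
```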
S
Aug 24 05:55:34.180: INFO: Running AfterSuite actions on all nodes
Aug 24 05:55:34.181: INFO: Running AfterSuite actions on node 1
Aug 24 05:55:34.182: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6820.365 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS