I0821 18:51:05.988002 6 e2e.go:243] Starting e2e run "cb749b2a-50e2-42fc-bf79-e41eda326909" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1598035864 - Will randomize all specs
Will run 215 of 4413 specs

Aug 21 18:51:06.183: INFO: >>> kubeConfig: /root/.kube/config
Aug 21 18:51:06.187: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Aug 21 18:51:06.212: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Aug 21 18:51:06.241: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Aug 21 18:51:06.241: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Aug 21 18:51:06.241: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Aug 21 18:51:06.250: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Aug 21 18:51:06.250: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Aug 21 18:51:06.250: INFO: e2e test version: v1.15.12
Aug 21 18:51:06.250: INFO: kube-apiserver version: v1.15.12
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:51:06.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Aug 21 18:51:06.886: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Aug 21 18:51:06.910: INFO: Waiting up to 5m0s for pod "pod-65281df7-3c05-4edb-942d-0a2756d56951" in namespace "emptydir-5725" to be "success or failure"
Aug 21 18:51:07.099: INFO: Pod "pod-65281df7-3c05-4edb-942d-0a2756d56951": Phase="Pending", Reason="", readiness=false. Elapsed: 188.602553ms
Aug 21 18:51:09.154: INFO: Pod "pod-65281df7-3c05-4edb-942d-0a2756d56951": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243403213s
Aug 21 18:51:11.261: INFO: Pod "pod-65281df7-3c05-4edb-942d-0a2756d56951": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.350991675s
STEP: Saw pod success
Aug 21 18:51:11.261: INFO: Pod "pod-65281df7-3c05-4edb-942d-0a2756d56951" satisfied condition "success or failure"
Aug 21 18:51:11.264: INFO: Trying to get logs from node iruya-worker2 pod pod-65281df7-3c05-4edb-942d-0a2756d56951 container test-container:
STEP: delete the pod
Aug 21 18:51:11.306: INFO: Waiting for pod pod-65281df7-3c05-4edb-942d-0a2756d56951 to disappear
Aug 21 18:51:11.483: INFO: Pod pod-65281df7-3c05-4edb-942d-0a2756d56951 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:51:11.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5725" for this suite.
Aug 21 18:51:17.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:51:17.613: INFO: namespace emptydir-5725 deletion completed in 6.125915252s

• [SLOW TEST:11.363 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
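For readers reproducing this spec by hand: the pod it creates corresponds roughly to the manifest below. All names, the image, and the command are illustrative stand-ins, not taken from the run; the suite drives its own test image rather than busybox.

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-tmpfs            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                     # stand-in for the e2e test image
    command: ["sh", "-c", "echo data > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                   # Memory backs the emptyDir with tmpfs, the variant under test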
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:51:17.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-7a59574c-1e50-46bf-89a4-c99d8349fb84
STEP: Creating a pod to test consume secrets
Aug 21 18:51:17.776: INFO: Waiting up to 5m0s for pod "pod-secrets-91747b81-479d-466a-b05b-bc42d90b6916" in namespace "secrets-7754" to be "success or failure"
Aug 21 18:51:17.787: INFO: Pod "pod-secrets-91747b81-479d-466a-b05b-bc42d90b6916": Phase="Pending", Reason="", readiness=false. Elapsed: 11.211191ms
Aug 21 18:51:20.010: INFO: Pod "pod-secrets-91747b81-479d-466a-b05b-bc42d90b6916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234583436s
Aug 21 18:51:22.088: INFO: Pod "pod-secrets-91747b81-479d-466a-b05b-bc42d90b6916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.312108914s
STEP: Saw pod success
Aug 21 18:51:22.088: INFO: Pod "pod-secrets-91747b81-479d-466a-b05b-bc42d90b6916" satisfied condition "success or failure"
Aug 21 18:51:22.153: INFO: Trying to get logs from node iruya-worker pod pod-secrets-91747b81-479d-466a-b05b-bc42d90b6916 container secret-volume-test:
STEP: delete the pod
Aug 21 18:51:22.308: INFO: Waiting for pod pod-secrets-91747b81-479d-466a-b05b-bc42d90b6916 to disappear
Aug 21 18:51:22.447: INFO: Pod pod-secrets-91747b81-479d-466a-b05b-bc42d90b6916 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:51:22.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7754" for this suite.
Aug 21 18:51:28.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:51:28.630: INFO: namespace secrets-7754 deletion completed in 6.178448599s

• [SLOW TEST:11.016 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
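The "with mappings" variant exercises the items field of a secret volume, which remaps a key onto a caller-chosen file path. A minimal sketch, with all names hypothetical:

apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map                # hypothetical
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo               # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:                           # the mapping under test: key -> custom path
      - key: data-1
        path: new-path-data-1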
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:51:28.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-73743faf-29c8-46d3-b584-34797fe00085
STEP: Creating a pod to test consume secrets
Aug 21 18:51:28.739: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd" in namespace "projected-3557" to be "success or failure"
Aug 21 18:51:28.747: INFO: Pod "pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.514637ms
Aug 21 18:51:30.752: INFO: Pod "pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012870734s
Aug 21 18:51:32.756: INFO: Pod "pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd": Phase="Running", Reason="", readiness=true. Elapsed: 4.017129761s
Aug 21 18:51:34.760: INFO: Pod "pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021339296s
STEP: Saw pod success
Aug 21 18:51:34.760: INFO: Pod "pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd" satisfied condition "success or failure"
Aug 21 18:51:34.763: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd container projected-secret-volume-test:
STEP: delete the pod
Aug 21 18:51:34.804: INFO: Waiting for pod pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd to disappear
Aug 21 18:51:34.807: INFO: Pod pod-projected-secrets-39323d88-85db-4e6e-a055-a75dde3dcdcd no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:51:34.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3557" for this suite.
Aug 21 18:51:40.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:51:40.892: INFO: namespace projected-3557 deletion completed in 6.082230806s

• [SLOW TEST:12.262 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
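The projected-secret variant layers the same kind of secret through a projected volume and additionally pins the per-item file mode. A sketch under the same assumptions (all names hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo     # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected-secret-volume && cat /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map   # hypothetical secret created beforehand
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400                      # the per-item "Item Mode" under test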
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:51:40.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 21 18:51:41.091: INFO: Waiting up to 5m0s for pod "downward-api-a68e708f-6af9-4acd-961f-814289e0a367" in namespace "downward-api-5233" to be "success or failure"
Aug 21 18:51:41.100: INFO: Pod "downward-api-a68e708f-6af9-4acd-961f-814289e0a367": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199235ms
Aug 21 18:51:43.103: INFO: Pod "downward-api-a68e708f-6af9-4acd-961f-814289e0a367": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011461623s
Aug 21 18:51:45.106: INFO: Pod "downward-api-a68e708f-6af9-4acd-961f-814289e0a367": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015061835s
STEP: Saw pod success
Aug 21 18:51:45.106: INFO: Pod "downward-api-a68e708f-6af9-4acd-961f-814289e0a367" satisfied condition "success or failure"
Aug 21 18:51:45.109: INFO: Trying to get logs from node iruya-worker pod downward-api-a68e708f-6af9-4acd-961f-814289e0a367 container dapi-container:
STEP: delete the pod
Aug 21 18:51:45.236: INFO: Waiting for pod downward-api-a68e708f-6af9-4acd-961f-814289e0a367 to disappear
Aug 21 18:51:45.243: INFO: Pod downward-api-a68e708f-6af9-4acd-961f-814289e0a367 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:51:45.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5233" for this suite.
Aug 21 18:51:51.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:51:51.356: INFO: namespace downward-api-5233 deletion completed in 6.109825328s

• [SLOW TEST:10.463 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
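The env vars here come from downward-API fieldRefs. A minimal pod of the kind this spec creates might look like the following (name, image, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo              # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP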
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:51:51.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 21 18:51:51.402: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 18:51:51.423: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 18:51:51.447: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Aug 21 18:51:51.452: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 21 18:51:51.452: INFO: Container kube-proxy ready: true, restart count 0
Aug 21 18:51:51.452: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 21 18:51:51.452: INFO: Container kindnet-cni ready: true, restart count 0
Aug 21 18:51:51.452: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Aug 21 18:51:51.456: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 21 18:51:51.456: INFO: Container kube-proxy ready: true, restart count 0
Aug 21 18:51:51.456: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 21 18:51:51.456: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-1314aef1-6817-4986-84d4-067a6fe828b4 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-1314aef1-6817-4986-84d4-067a6fe828b4 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-1314aef1-6817-4986-84d4-067a6fe828b4
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:51:59.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7394" for this suite.
Aug 21 18:52:17.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:52:17.699: INFO: namespace sched-pred-7394 deletion completed in 18.094420594s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:26.342 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
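The flow above is: schedule a throwaway pod to find a usable node, apply a random label to that node, then relaunch the pod with a matching nodeSelector. Roughly, with a hypothetical label key standing in for the random kubernetes.io/e2e-... key from the run:

# kubectl label node iruya-worker2 kubernetes.io/e2e-demo=42     (hypothetical key)
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                    # hypothetical
spec:
  nodeSelector:
    kubernetes.io/e2e-demo: "42"       # must match the label applied to the node
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1        # illustrative; any always-running image works

Removing the label afterwards (kubectl label node iruya-worker2 kubernetes.io/e2e-demo-) restores the node, which is what the final STEPs verify.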
[sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:52:17.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0821 18:52:18.862028 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 18:52:18.862: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:52:18.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6756" for this suite.
Aug 21 18:52:24.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:52:25.020: INFO: namespace gc-6756 deletion completed in 6.154325253s

• [SLOW TEST:7.320 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
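The "expected 0 rs, got 1 rs" lines are the test polling while the garbage collector catches up. The setup is just a Deployment that is then deleted without orphaning, so the GC removes the owned ReplicaSet and Pods through their ownerReferences. An illustrative Deployment (hypothetical name and image):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: gc-demo                        # hypothetical
spec:
  replicas: 2
  selector:
    matchLabels:
      app: gc-demo
  template:
    metadata:
      labels:
        app: gc-demo
    spec:
      containers:
      - name: nginx
        image: nginx

Deleting it with a plain kubectl delete deployment gc-demo (cascading deletion, the default) reproduces the behavior under test.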
[k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:52:25.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-7353
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-7353
STEP: Deleting pre-stop pod
Aug 21 18:52:38.151: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:52:38.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-7353" for this suite.
Aug 21 18:53:16.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:53:16.294: INFO: namespace prestop-7353 deletion completed in 38.08360144s

• [SLOW TEST:51.274 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
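The server pod's preStop hook is what reports back to the tester pod, which is where the "prestop": 1 count above comes from. Generically, a preStop exec hook is declared as below; the pod name, image, and callback target here are hypothetical stand-ins for the test's own wiring:

apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo                   # hypothetical
spec:
  containers:
  - name: server
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          # TESTER_POD_IP is a placeholder; the conformance test points the
          # hook at its tester pod, which counts /prestop callbacks
          command: ["sh", "-c", "wget -q -O- http://TESTER_POD_IP/prestop || true"]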
[k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:53:16.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 18:53:21.817: INFO: Successfully updated pod "pod-update-activedeadlineseconds-bc6c6277-9757-49d9-a23f-2961346dc3d1"
Aug 21 18:53:21.817: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-bc6c6277-9757-49d9-a23f-2961346dc3d1" in namespace "pods-1518" to be "terminated due to deadline exceeded"
Aug 21 18:53:21.852: INFO: Pod "pod-update-activedeadlineseconds-bc6c6277-9757-49d9-a23f-2961346dc3d1": Phase="Running", Reason="", readiness=true. Elapsed: 34.819451ms
Aug 21 18:53:23.880: INFO: Pod "pod-update-activedeadlineseconds-bc6c6277-9757-49d9-a23f-2961346dc3d1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.06299727s
Aug 21 18:53:23.880: INFO: Pod "pod-update-activedeadlineseconds-bc6c6277-9757-49d9-a23f-2961346dc3d1" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:53:23.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1518" for this suite.
Aug 21 18:53:29.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:53:29.972: INFO: namespace pods-1518 deletion completed in 6.087167659s

• [SLOW TEST:13.678 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
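spec.activeDeadlineSeconds is one of the few pod-spec fields that may be mutated on a live pod (it can only be shortened). The spec creates a long-running pod, lowers the deadline, and waits for the kubelet to fail the pod with reason DeadlineExceeded, as logged above. A sketch with hypothetical name and values:

apiVersion: v1
kind: Pod
metadata:
  name: adl-demo                       # hypothetical
spec:
  activeDeadlineSeconds: 30            # initial deadline
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]

# Shorten the deadline on the live pod; the kubelet then fails it with
# reason DeadlineExceeded:
#   kubectl patch pod adl-demo --type=merge -p '{"spec":{"activeDeadlineSeconds":5}}'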
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:53:29.973: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-77f08418-f2c7-4ce7-9ba3-3876e2ad817c
STEP: Creating a pod to test consume configMaps
Aug 21 18:53:30.028: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416" in namespace "projected-6097" to be "success or failure"
Aug 21 18:53:30.043: INFO: Pod "pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416": Phase="Pending", Reason="", readiness=false. Elapsed: 15.134154ms
Aug 21 18:53:32.054: INFO: Pod "pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026124407s
Aug 21 18:53:34.059: INFO: Pod "pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416": Phase="Running", Reason="", readiness=true. Elapsed: 4.030963173s
Aug 21 18:53:36.062: INFO: Pod "pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.03424028s
STEP: Saw pod success
Aug 21 18:53:36.062: INFO: Pod "pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416" satisfied condition "success or failure"
Aug 21 18:53:36.064: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416 container projected-configmap-volume-test:
STEP: delete the pod
Aug 21 18:53:36.109: INFO: Waiting for pod pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416 to disappear
Aug 21 18:53:36.133: INFO: Pod pod-projected-configmaps-a453351a-4656-4bd7-92b2-41dcb64a9416 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:53:36.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6097" for this suite.
Aug 21 18:53:42.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:53:42.263: INFO: namespace projected-6097 deletion completed in 6.127522505s

• [SLOW TEST:12.290 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:53:42.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-bfe70297-6c71-4986-a37f-5f0922636d2c
STEP: Creating a pod to test consume configMaps
Aug 21 18:53:42.380: INFO: Waiting up to 5m0s for pod "pod-configmaps-b216c1f4-3d15-44c5-8dde-921ebdd0d07e" in namespace "configmap-7748" to be "success or failure"
Aug 21 18:53:42.397: INFO: Pod "pod-configmaps-b216c1f4-3d15-44c5-8dde-921ebdd0d07e": Phase="Pending", Reason="", readiness=false. Elapsed: 17.438546ms
Aug 21 18:53:44.594: INFO: Pod "pod-configmaps-b216c1f4-3d15-44c5-8dde-921ebdd0d07e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213958166s
Aug 21 18:53:46.598: INFO: Pod "pod-configmaps-b216c1f4-3d15-44c5-8dde-921ebdd0d07e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.217992057s
STEP: Saw pod success
Aug 21 18:53:46.598: INFO: Pod "pod-configmaps-b216c1f4-3d15-44c5-8dde-921ebdd0d07e" satisfied condition "success or failure"
Aug 21 18:53:46.601: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b216c1f4-3d15-44c5-8dde-921ebdd0d07e container configmap-volume-test:
STEP: delete the pod
Aug 21 18:53:46.787: INFO: Waiting for pod pod-configmaps-b216c1f4-3d15-44c5-8dde-921ebdd0d07e to disappear
Aug 21 18:53:46.971: INFO: Pod pod-configmaps-b216c1f4-3d15-44c5-8dde-921ebdd0d07e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:53:46.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7748" for this suite.
Aug 21 18:53:53.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:53:53.088: INFO: namespace configmap-7748 deletion completed in 6.113931242s

• [SLOW TEST:10.825 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
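The plain ConfigMap-volume case amounts to the following pair of objects; names are hypothetical stand-ins for the run's randomly suffixed ones:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume          # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo            # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume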
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:53:53.089: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-9a446118-ff67-49db-a870-27988c9b3e91
STEP: Creating a pod to test consume configMaps
Aug 21 18:53:53.806: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806" in namespace "projected-5267" to be "success or failure"
Aug 21 18:53:53.923: INFO: Pod "pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806": Phase="Pending", Reason="", readiness=false. Elapsed: 117.31274ms
Aug 21 18:53:55.932: INFO: Pod "pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126218618s
Aug 21 18:53:57.936: INFO: Pod "pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806": Phase="Running", Reason="", readiness=true. Elapsed: 4.130106739s
Aug 21 18:53:59.940: INFO: Pod "pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.134801284s
STEP: Saw pod success
Aug 21 18:53:59.941: INFO: Pod "pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806" satisfied condition "success or failure"
Aug 21 18:53:59.943: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806 container projected-configmap-volume-test:
STEP: delete the pod
Aug 21 18:53:59.980: INFO: Waiting for pod pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806 to disappear
Aug 21 18:53:59.984: INFO: Pod pod-projected-configmaps-cf210bea-4c47-44b0-8c34-50a5510f1806 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:53:59.984: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5267" for this suite.
Aug 21 18:54:06.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:54:06.166: INFO: namespace projected-5267 deletion completed in 6.177518953s

• [SLOW TEST:13.077 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
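The non-root variant is the same consumption pattern with the pod forced onto an unprivileged UID via securityContext, so the projected file must be readable without root. Sketch (hypothetical names and UID):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-nonroot   # hypothetical
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # any non-root UID; the point of the variant
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume   # hypothetical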
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:54:06.166: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-972f16fa-e883-4562-9b09-cc13f3bea904
STEP: Creating a pod to test consume secrets
Aug 21 18:54:07.780: INFO: Waiting up to 5m0s for pod "pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0" in namespace "secrets-8262" to be "success or failure"
Aug 21 18:54:07.947: INFO: Pod "pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0": Phase="Pending", Reason="", readiness=false. Elapsed: 167.262692ms
Aug 21 18:54:09.954: INFO: Pod "pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.174639041s
Aug 21 18:54:12.019: INFO: Pod "pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.239418917s
Aug 21 18:54:14.025: INFO: Pod "pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.245606751s
Aug 21 18:54:16.029: INFO: Pod "pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0": Phase="Running", Reason="", readiness=true. Elapsed: 8.249411987s
Aug 21 18:54:18.033: INFO: Pod "pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.253364371s
STEP: Saw pod success
Aug 21 18:54:18.033: INFO: Pod "pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0" satisfied condition "success or failure"
Aug 21 18:54:18.036: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0 container secret-volume-test:
STEP: delete the pod
Aug 21 18:54:18.059: INFO: Waiting for pod pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0 to disappear
Aug 21 18:54:18.063: INFO: Pod pod-secrets-89beb779-12d7-46cc-b14d-a5edd96651b0 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:54:18.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8262" for this suite.
Aug 21 18:54:24.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:54:24.177: INFO: namespace secrets-8262 deletion completed in 6.111453177s

• [SLOW TEST:18.011 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:54:24.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 21 18:54:24.255: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Aug 21 18:54:25.319: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:54:25.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-6547" for this suite.
Aug 21 18:54:31.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:54:31.675: INFO: namespace replication-controller-6547 deletion completed in 6.226785319s

• [SLOW TEST:7.497 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
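The quota and controller involved look roughly like this; replicas: 3 is an assumption standing in for "more than the allowed pod quota", and the image is illustrative:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                          # only two pods allowed in the namespace
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                          # assumed: one more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx

While the third pod is rejected by the quota, the RC carries a ReplicaFailure condition; scaling replicas down to 2 clears it, which is exactly what the spec checks.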
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:54:31.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Aug 21 18:54:36.031: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:54:36.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8553" for this suite.
Aug 21 18:54:42.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:54:42.190: INFO: namespace container-runtime-8553 deletion completed in 6.098145296s

• [SLOW TEST:10.515 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
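FallbackToLogsOnError means: if the container fails and wrote nothing to its terminationMessagePath, the kubelet uses the tail of the container log as the termination message, which is why the spec expects DONE above. An illustrative pod (name, image, and command are stand-ins):

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo       # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    # fails without ever writing /dev/termination-log, so the log tail
    # ("DONE") becomes the termination message
    command: ["sh", "-c", "echo -n DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError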
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:54:42.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:54:42.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4785" for this suite.
Aug 21 18:54:48.622: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:54:48.688: INFO: namespace kubelet-test-4785 deletion completed in 6.101968115s

• [SLOW TEST:6.498 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:54:48.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 21 18:54:49.042: INFO: Waiting up to 5m0s for pod "downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56" in namespace "projected-7739" to be "success or failure"
Aug 21 18:54:49.064: INFO: Pod "downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56": Phase="Pending", Reason="", readiness=false. Elapsed: 22.187486ms
Aug 21 18:54:51.110: INFO: Pod "downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068157554s
Aug 21 18:54:53.440: INFO: Pod "downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.397312558s
Aug 21 18:54:55.768: INFO: Pod "downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56": Phase="Running", Reason="", readiness=true. Elapsed: 6.725693829s
Aug 21 18:54:57.773: INFO: Pod "downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.730316766s
STEP: Saw pod success
Aug 21 18:54:57.773: INFO: Pod "downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56" satisfied condition "success or failure"
Aug 21 18:54:57.775: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56 container client-container:
STEP: delete the pod
Aug 21 18:54:57.885: INFO: Waiting for pod downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56 to disappear
Aug 21 18:54:57.944: INFO: Pod downwardapi-volume-310fd3c2-bd9d-4ae9-b771-0ecfd1c3dd56 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:54:57.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7739" for this suite.
Aug 21 18:55:04.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:55:04.107: INFO: namespace projected-7739 deletion completed in 6.157507502s

• [SLOW TEST:15.418 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
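The cpu limit reaches the container through a downwardAPI volume with a resourceFieldRef. A minimal equivalent; the name, image, and the 500m limit are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m                      # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m                  # expose the value in millicores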
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:55:04.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-4763/configmap-test-92c74d66-25ae-4e66-bbd8-a7ef6283afdc
STEP: Creating a pod to test consume configMaps
Aug 21 18:55:04.210: INFO: Waiting up to 5m0s for pod "pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84" in namespace "configmap-4763" to be "success or failure"
Aug 21 18:55:04.226: INFO: Pod "pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84": Phase="Pending", Reason="", readiness=false. Elapsed: 15.714796ms
Aug 21 18:55:06.308: INFO: Pod "pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097393616s
Aug 21 18:55:08.311: INFO: Pod "pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84": Phase="Running", Reason="", readiness=true. Elapsed: 4.100966901s
Aug 21 18:55:10.337: INFO: Pod "pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.126731771s
STEP: Saw pod success
Aug 21 18:55:10.337: INFO: Pod "pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84" satisfied condition "success or failure"
Aug 21 18:55:10.340: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84 container env-test:
STEP: delete the pod
Aug 21 18:55:10.426: INFO: Waiting for pod pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84 to disappear
Aug 21 18:55:10.505: INFO: Pod pod-configmaps-2cd50eed-1403-481e-9b5b-4502dcaa6c84 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 18:55:10.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4763" for this suite.
Aug 21 18:55:16.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 18:55:16.603: INFO: namespace configmap-4763 deletion completed in 6.093801506s

• [SLOW TEST:12.496 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
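Here the configMap key is consumed as an environment variable rather than a file, via configMapKeyRef. Hypothetical names throughout:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-env             # hypothetical
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-env-demo        # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $DATA_1"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-env
          key: data-1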
[sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 18:55:16.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if kubectl describe prints relevant information for rc and pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 21 18:55:16.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3289'
Aug 21 18:55:19.699: INFO: stderr: ""
Aug 21 18:55:19.699: INFO: stdout: "replicationcontroller/redis-master created\n"
Aug 21 18:55:19.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3289'
Aug 21 18:55:19.985: INFO: stderr: ""
Aug 21 18:55:19.985: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Aug 21 18:55:20.991: INFO: Selector matched 1 pods for map[app:redis] Aug 21 18:55:20.991: INFO: Found 0 / 1 Aug 21 18:55:21.991: INFO: Selector matched 1 pods for map[app:redis] Aug 21 18:55:21.991: INFO: Found 0 / 1 Aug 21 18:55:22.990: INFO: Selector matched 1 pods for map[app:redis] Aug 21 18:55:22.990: INFO: Found 0 / 1 Aug 21 18:55:24.002: INFO: Selector matched 1 pods for map[app:redis] Aug 21 18:55:24.002: INFO: Found 1 / 1 Aug 21 18:55:24.002: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 21 18:55:24.005: INFO: Selector matched 1 pods for map[app:redis] Aug 21 18:55:24.005: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 21 18:55:24.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-246tm --namespace=kubectl-3289' Aug 21 18:55:24.159: INFO: stderr: "" Aug 21 18:55:24.159: INFO: stdout: "Name: redis-master-246tm\nNamespace: kubectl-3289\nPriority: 0\nNode: iruya-worker/172.18.0.9\nStart Time: Fri, 21 Aug 2020 18:55:19 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.1.108\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://75c5b981d0f501d1b6a7e85aa3162bbe56e5cd49fd19d2ab43fdd743ab2cef77\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 21 Aug 2020 18:55:22 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-lqxhw (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-lqxhw:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-lqxhw\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-3289/redis-master-246tm to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 2s kubelet, iruya-worker Started container redis-master\n" Aug 21 18:55:24.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3289' Aug 21 18:55:24.290: INFO: stderr: "" Aug 21 18:55:24.290: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3289\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-246tm\n" Aug 21 18:55:24.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3289' Aug 21 18:55:24.407: INFO: stderr:
"" Aug 21 18:55:24.407: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3289\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.199.144\nPort: <unset> 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.108:6379\nSession Affinity: None\nEvents: <none>\n" Aug 21 18:55:24.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Aug 21 18:55:24.560: INFO: stderr: "" Aug 21 18:55:24.560: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sat, 15 Aug 2020 09:34:51 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 21 Aug 2020 18:54:31 +0000 Sat, 15 Aug 2020 09:34:48 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 21 Aug 2020 18:54:31 +0000 Sat, 15 Aug 2020 09:34:48 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 21 Aug 2020 18:54:31 +0000 Sat, 15 Aug 2020 09:34:48 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 21 Aug 2020 18:54:31 +0000 Sat, 15 Aug 2020 09:35:31 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759872Ki\n pods: 110\nSystem Info:\n Machine ID: 3ed9130db08840259d2231bd97220883\n System UUID: e52cc602-b019-45cd-b06f-235cc5705532\n Boot ID: 11738d2d-5baa-4089-8e7f-2fb0329fce58\n Kernel Version: 4.15.0-109-generic\n OS Image: Ubuntu 20.04 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.4.0-beta.1-85-g334f567e\n Kubelet Version: v1.15.12\n Kube-Proxy Version: v1.15.12\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-6krdd 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 6d9h\n kube-system coredns-5d4dd4b4db-htp88 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 6d9h\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d9h\n kube-system kindnet-gvnsh 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 6d9h\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 6d9h\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 6d9h\n kube-system kube-proxy-ndl9h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d9h\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 6d9h\n local-path-storage local-path-provisioner-668779bd7-g227z 0 (0%) 0 (0%) 0 (0%) 0 (0%) 6d9h\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n
-------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Aug 21 18:55:24.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3289' Aug 21 18:55:24.659: INFO: stderr: "" Aug 21 18:55:24.659: INFO: stdout: "Name: kubectl-3289\nLabels: e2e-framework=kubectl\n e2e-run=cb749b2a-50e2-42fc-bf79-e41eda326909\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 18:55:24.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3289" for this suite. Aug 21 18:55:48.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 18:55:48.779: INFO: namespace kubectl-3289 deletion completed in 24.116495019s • [SLOW TEST:32.175 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 18:55:48.779: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-1a43c37e-30fb-4dda-a7f8-9b448bfffc06 STEP: Creating a pod to test consume secrets Aug 21 18:55:49.063: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17" in namespace "projected-911" to be "success or failure" Aug 21 18:55:49.119: INFO: Pod "pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17": Phase="Pending", Reason="", readiness=false. Elapsed: 56.641495ms Aug 21 18:55:51.182: INFO: Pod "pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17": Phase="Pending", Reason="", readiness=false. Elapsed: 2.119527968s Aug 21 18:55:53.187: INFO: Pod "pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17": Phase="Pending", Reason="", readiness=false.
Elapsed: 4.124010792s Aug 21 18:55:55.190: INFO: Pod "pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.127197797s STEP: Saw pod success Aug 21 18:55:55.190: INFO: Pod "pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17" satisfied condition "success or failure" Aug 21 18:55:55.193: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17 container projected-secret-volume-test: STEP: delete the pod Aug 21 18:55:55.454: INFO: Waiting for pod pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17 to disappear Aug 21 18:55:55.484: INFO: Pod pod-projected-secrets-0f7ea2cd-a60e-4f08-ae26-1be35cf79c17 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 18:55:55.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-911" for this suite. Aug 21 18:56:03.531: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 18:56:03.596: INFO: namespace projected-911 deletion completed in 8.108088782s • [SLOW TEST:14.817 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 18:56:03.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-mb4c STEP: Creating a pod to test atomic-volume-subpath Aug 21 18:56:03.998: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mb4c" in namespace "subpath-9235" to be "success or failure" Aug 21 18:56:04.001: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.576982ms Aug 21 18:56:06.004: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006116536s Aug 21 18:56:08.007: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 4.009087816s Aug 21 18:56:10.011: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 6.012614744s Aug 21 18:56:12.015: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 8.016620625s Aug 21 18:56:14.019: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 10.020707225s Aug 21 18:56:16.022: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 12.024433677s Aug 21 18:56:18.027: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 14.0285304s Aug 21 18:56:20.030: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 16.032168947s Aug 21 18:56:22.034: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 18.0362057s Aug 21 18:56:24.038: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 20.03996257s Aug 21 18:56:26.042: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Running", Reason="", readiness=true. Elapsed: 22.043486773s Aug 21 18:56:28.046: INFO: Pod "pod-subpath-test-downwardapi-mb4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.047923384s STEP: Saw pod success Aug 21 18:56:28.046: INFO: Pod "pod-subpath-test-downwardapi-mb4c" satisfied condition "success or failure" Aug 21 18:56:28.049: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-mb4c container test-container-subpath-downwardapi-mb4c: STEP: delete the pod Aug 21 18:56:28.211: INFO: Waiting for pod pod-subpath-test-downwardapi-mb4c to disappear Aug 21 18:56:28.288: INFO: Pod pod-subpath-test-downwardapi-mb4c no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-mb4c Aug 21 18:56:28.288: INFO: Deleting pod "pod-subpath-test-downwardapi-mb4c" in namespace "subpath-9235" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 18:56:28.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9235" for this suite. 
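The long Running phase above (roughly 24 seconds of polls) is the point of the atomic-writer subpath tests: the container reads, through a subPath mount, a file that lives inside a downward API volume while the kubelet atomically rewrites that volume. A sketch of the shape being exercised; only the volume type, the subPath mechanism, and the pod/container names come from the log, while paths, fieldRef, and the command are illustrative assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-downwardapi-mb4c
spec:
  restartPolicy: Never
  volumes:
  - name: downward-vol
    downwardAPI:
      items:
      - path: downward/podname       # written (and atomically updated) by the kubelet
        fieldRef:
          fieldPath: metadata.name
  containers:
  - name: test-container-subpath-downwardapi-mb4c
    image: busybox                   # assumed image
    command: ["sh", "-c", "for i in $(seq 1 20); do cat /test/podname; sleep 1; done"]
    volumeMounts:
    - name: downward-vol
      mountPath: /test
      subPath: downward              # mounts only this sub-directory of the volume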
Aug 21 18:56:34.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 18:56:34.431: INFO: namespace subpath-9235 deletion completed in 6.134948214s • [SLOW TEST:30.835 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 18:56:34.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 18:56:34.463: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 18:56:38.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2739" for this suite. 
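The websocket test bypasses kubectl entirely: after submitting a pod, it opens the pod's exec subresource on the API server with a websocket upgrade and verifies the command output that comes back over the channel. A sketch of the two pieces; the pod and command below are assumptions (only the namespace comes from the log), while the exec subresource path is the standard core/v1 endpoint:

# Request shape, one query parameter per command argument:
#   GET /api/v1/namespaces/pods-2739/pods/pod-exec-websocket/exec?command=echo&command=hello&stdout=true
#   sent with an "Upgrade: websocket" handshake instead of SPDY.
apiVersion: v1
kind: Pod
metadata:
  name: pod-exec-websocket     # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox             # assumed image
    command: ["sh", "-c", "sleep 600"]   # keeps the container alive to exec into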
Aug 21 18:57:28.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 18:57:28.748: INFO: namespace pods-2739 deletion completed in 50.093113443s • [SLOW TEST:54.316 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 18:57:28.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-1089afd9-9a3e-405a-ad1c-f5d7fb039ddc STEP: Creating a pod to test consume secrets Aug 21 18:57:28.870: INFO: Waiting up to 5m0s for pod "pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8" in namespace "secrets-7935" to be "success or failure" Aug 21 18:57:28.876: INFO: Pod "pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.586366ms Aug 21 18:57:30.880: INFO: Pod "pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010478852s Aug 21 18:57:32.884: INFO: Pod "pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8": Phase="Running", Reason="", readiness=true. Elapsed: 4.014444713s Aug 21 18:57:34.888: INFO: Pod "pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.018054892s STEP: Saw pod success Aug 21 18:57:34.888: INFO: Pod "pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8" satisfied condition "success or failure" Aug 21 18:57:34.891: INFO: Trying to get logs from node iruya-worker pod pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8 container secret-volume-test: STEP: delete the pod Aug 21 18:57:34.907: INFO: Waiting for pod pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8 to disappear Aug 21 18:57:34.912: INFO: Pod pod-secrets-58f91ac9-341f-458b-a034-1478d0a125f8 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 18:57:34.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7935" for this suite. 
Aug 21 18:57:40.927: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 18:57:41.004: INFO: namespace secrets-7935 deletion completed in 6.087584417s • [SLOW TEST:12.255 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 18:57:41.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-7p4s STEP: Creating a pod to test atomic-volume-subpath Aug 21 18:57:41.094: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7p4s" in namespace "subpath-3390" to be "success or failure" Aug 21 18:57:41.111: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Pending", Reason="", readiness=false. Elapsed: 17.498271ms Aug 21 18:57:43.116: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02205582s Aug 21 18:57:45.119: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 4.025496707s Aug 21 18:57:47.124: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 6.029843974s Aug 21 18:57:49.128: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 8.034223609s Aug 21 18:57:51.132: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 10.038652354s Aug 21 18:57:53.155: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 12.060984962s Aug 21 18:57:55.159: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 14.065402821s Aug 21 18:57:57.163: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 16.069420829s Aug 21 18:57:59.167: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 18.073559942s Aug 21 18:58:01.171: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.077262179s Aug 21 18:58:03.175: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Running", Reason="", readiness=true. Elapsed: 22.081255197s Aug 21 18:58:05.179: INFO: Pod "pod-subpath-test-projected-7p4s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.084966359s STEP: Saw pod success Aug 21 18:58:05.179: INFO: Pod "pod-subpath-test-projected-7p4s" satisfied condition "success or failure" Aug 21 18:58:05.181: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-7p4s container test-container-subpath-projected-7p4s: STEP: delete the pod Aug 21 18:58:05.202: INFO: Waiting for pod pod-subpath-test-projected-7p4s to disappear Aug 21 18:58:05.218: INFO: Pod pod-subpath-test-projected-7p4s no longer exists STEP: Deleting pod pod-subpath-test-projected-7p4s Aug 21 18:58:05.218: INFO: Deleting pod "pod-subpath-test-projected-7p4s" in namespace "subpath-3390" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 18:58:05.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3390" for this suite. Aug 21 18:58:11.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 18:58:11.303: INFO: namespace subpath-3390 deletion completed in 6.07886985s • [SLOW TEST:30.299 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 18:58:11.304: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Aug 21 18:58:15.910: INFO: Successfully updated pod "labelsupdatef1f7eaea-d646-4f8f-b51d-8d94c63e43d1" [AfterEach] [sig-storage] Projected downwardAPI 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 18:58:17.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-587" for this suite. Aug 21 18:58:39.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 18:58:40.026: INFO: namespace projected-587 deletion completed in 22.091383559s • [SLOW TEST:28.723 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 18:58:40.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Aug 21 18:58:44.638: INFO: Successfully updated pod "labelsupdated0b58a99-6597-47f5-a59a-4a494c13d103" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 18:58:48.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5286" for this suite. 
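Both labels-update tests (the projected variant above and this one) exercise the same kubelet behavior: a downward API volume file that tracks pod metadata is rewritten after the suite updates the pod's labels, and the test passes once the mounted file reflects the change. The volume shape involved, as a sketch (file path, label key, image, and command are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate
  labels:
    key: value1                # the suite later updates this; the mounted file follows
spec:
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
  containers:
  - name: client-container
    image: busybox             # assumed image
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo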
Aug 21 18:59:10.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 18:59:10.809: INFO: namespace downward-api-5286 deletion completed in 22.14181234s • [SLOW TEST:30.782 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 18:59:10.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-7ba79bd4-39a2-4849-8a1c-30f391eba1e2 STEP: Creating secret with name s-test-opt-upd-0664ae26-51b1-4f93-a345-ed3739245f50 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-7ba79bd4-39a2-4849-8a1c-30f391eba1e2 STEP: Updating secret s-test-opt-upd-0664ae26-51b1-4f93-a345-ed3739245f50 STEP: Creating secret with name s-test-opt-create-695d5967-30e6-49b9-a95a-a93cd86c9d04 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:00:25.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6751" for this suite. 
Aug 21 19:00:47.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:00:47.836: INFO: namespace projected-6751 deletion completed in 22.114151551s • [SLOW TEST:97.027 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:00:47.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1778 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 21 19:00:48.218: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 21 19:01:16.675: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.197 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1778 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:01:16.675: INFO: >>> kubeConfig: /root/.kube/config I0821 19:01:16.716341 6 log.go:172] (0xc0012744d0) (0xc002640e60) Create stream I0821 19:01:16.716395 6 log.go:172] (0xc0012744d0) (0xc002640e60) Stream added, broadcasting: 1 I0821 19:01:16.719112 6 log.go:172] (0xc0012744d0) Reply frame received for 1 I0821 19:01:16.719160 6 log.go:172] (0xc0012744d0) (0xc002640fa0) Create stream I0821 19:01:16.719185 6 log.go:172] (0xc0012744d0) (0xc002640fa0) Stream added, broadcasting: 3 I0821 19:01:16.720153 6 log.go:172] (0xc0012744d0) Reply frame received for 3 I0821 19:01:16.720205 6 log.go:172] (0xc0012744d0) (0xc0019dc000) Create stream I0821 19:01:16.720221 6 log.go:172] (0xc0012744d0) (0xc0019dc000) Stream added, broadcasting: 5 I0821 19:01:16.721390 6 log.go:172] (0xc0012744d0) Reply frame received for 5 I0821 19:01:17.801637 6 log.go:172] (0xc0012744d0) Data frame received for 3 I0821 19:01:17.801751 6 log.go:172] (0xc002640fa0) (3) Data frame handling I0821 19:01:17.801788 6 log.go:172] (0xc002640fa0) (3) Data frame sent I0821 19:01:17.801824 6 log.go:172] (0xc0012744d0) Data frame received for 3 I0821 19:01:17.801897 6 log.go:172] (0xc002640fa0) (3) Data frame handling I0821 19:01:17.802159 6 log.go:172] (0xc0012744d0) Data frame received for 5 I0821 19:01:17.802216 6 log.go:172]
(0xc0019dc000) (5) Data frame handling I0821 19:01:17.807237 6 log.go:172] (0xc0012744d0) Data frame received for 1 I0821 19:01:17.807273 6 log.go:172] (0xc002640e60) (1) Data frame handling I0821 19:01:17.807302 6 log.go:172] (0xc002640e60) (1) Data frame sent I0821 19:01:17.807331 6 log.go:172] (0xc0012744d0) (0xc002640e60) Stream removed, broadcasting: 1 I0821 19:01:17.807368 6 log.go:172] (0xc0012744d0) Go away received I0821 19:01:17.808684 6 log.go:172] (0xc0012744d0) (0xc002640e60) Stream removed, broadcasting: 1 I0821 19:01:17.809265 6 log.go:172] (0xc0012744d0) (0xc002640fa0) Stream removed, broadcasting: 3 I0821 19:01:17.809295 6 log.go:172] (0xc0012744d0) (0xc0019dc000) Stream removed, broadcasting: 5 Aug 21 19:01:17.809: INFO: Found all expected endpoints: [netserver-0] Aug 21 19:01:17.812: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.117 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1778 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:01:17.813: INFO: >>> kubeConfig: /root/.kube/config I0821 19:01:17.837603 6 log.go:172] (0xc00198c6e0) (0xc000982460) Create stream I0821 19:01:17.837625 6 log.go:172] (0xc00198c6e0) (0xc000982460) Stream added, broadcasting: 1 I0821 19:01:17.843011 6 log.go:172] (0xc00198c6e0) Reply frame received for 1 I0821 19:01:17.843086 6 log.go:172] (0xc00198c6e0) (0xc002bd4000) Create stream I0821 19:01:17.843107 6 log.go:172] (0xc00198c6e0) (0xc002bd4000) Stream added, broadcasting: 3 I0821 19:01:17.844222 6 log.go:172] (0xc00198c6e0) Reply frame received for 3 I0821 19:01:17.844271 6 log.go:172] (0xc00198c6e0) (0xc002bd40a0) Create stream I0821 19:01:17.844284 6 log.go:172] (0xc00198c6e0) (0xc002bd40a0) Stream added, broadcasting: 5 I0821 19:01:17.845341 6 log.go:172] (0xc00198c6e0) Reply frame received for 5 I0821 19:01:18.936264 6 log.go:172] (0xc00198c6e0) Data frame received for 3 I0821 19:01:18.936321 6 log.go:172] (0xc002bd4000) (3) Data frame handling I0821 19:01:18.936352 6 log.go:172] (0xc002bd4000) (3) Data frame sent I0821 19:01:18.936622 6 log.go:172] (0xc00198c6e0) Data frame received for 5 I0821 19:01:18.936672 6 log.go:172] (0xc00198c6e0) Data frame received for 3 I0821 19:01:18.936717 6 log.go:172] (0xc002bd4000) (3) Data frame handling I0821 19:01:18.936807 6 log.go:172] (0xc002bd40a0) (5) Data frame handling I0821 19:01:18.939178 6 log.go:172] (0xc00198c6e0) Data frame received for 1 I0821 19:01:18.939211 6 log.go:172] (0xc000982460) (1) Data frame handling I0821 19:01:18.939257 6 log.go:172] (0xc000982460) (1) Data frame sent I0821 19:01:18.939367 6 log.go:172] (0xc00198c6e0) (0xc000982460) Stream removed, broadcasting: 1 I0821 19:01:18.939399 6 log.go:172] (0xc00198c6e0) Go away received I0821 19:01:18.939561 6 log.go:172] (0xc00198c6e0) (0xc000982460) Stream removed, broadcasting: 1 I0821 19:01:18.939595 6 log.go:172] (0xc00198c6e0) (0xc002bd4000) Stream removed, broadcasting: 3 I0821 19:01:18.939622 6 log.go:172] (0xc00198c6e0) (0xc002bd40a0) Stream removed, broadcasting: 5 Aug 21 19:01:18.939: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:01:18.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1778" for this suite.
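The probe itself is visible verbatim in the ExecWithOptions lines above: from host-test-container-pod, 'echo hostName | nc -w 1 -u <pod-ip> 8081' must reach the netserver pod on each node, and the test passes once both endpoints ([netserver-0] and [netserver-1]) answer. A stand-in for one netserver pod; the UDP port and pod naming come from the log, while the image and its flags are assumptions about the e2e netexec echo server:

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.6   # assumed; serves the netexec handlers
    args: ["netexec", "--http-port=8080", "--udp-port=8081"]
    ports:
    - containerPort: 8081
      protocol: UDP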
Aug 21 19:01:40.962: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:01:41.037: INFO: namespace pod-network-test-1778 deletion completed in 22.093048906s • [SLOW TEST:53.201 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:01:41.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-60efdfd4-b696-4cf5-b7ab-515539335288 STEP: Creating a pod to test consume secrets Aug 21 19:01:41.130: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d3be91b8-5a11-4c14-9bf9-677c720192fb" in namespace "projected-5876" to be "success or failure" Aug 21 19:01:41.152: INFO: Pod "pod-projected-secrets-d3be91b8-5a11-4c14-9bf9-677c720192fb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.533438ms Aug 21 19:01:43.156: INFO: Pod "pod-projected-secrets-d3be91b8-5a11-4c14-9bf9-677c720192fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025385024s Aug 21 19:01:45.158: INFO: Pod "pod-projected-secrets-d3be91b8-5a11-4c14-9bf9-677c720192fb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02818355s STEP: Saw pod success Aug 21 19:01:45.159: INFO: Pod "pod-projected-secrets-d3be91b8-5a11-4c14-9bf9-677c720192fb" satisfied condition "success or failure" Aug 21 19:01:45.161: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-d3be91b8-5a11-4c14-9bf9-677c720192fb container projected-secret-volume-test: STEP: delete the pod Aug 21 19:01:45.187: INFO: Waiting for pod pod-projected-secrets-d3be91b8-5a11-4c14-9bf9-677c720192fb to disappear Aug 21 19:01:45.260: INFO: Pod pod-projected-secrets-d3be91b8-5a11-4c14-9bf9-677c720192fb no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:01:45.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5876" for this suite. Aug 21 19:01:51.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:01:51.468: INFO: namespace projected-5876 deletion completed in 6.20345076s • [SLOW TEST:10.430 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:01:51.468: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Aug 21 19:01:51.573: INFO: Waiting up to 5m0s for pod "client-containers-311e7ff2-1790-4612-ab4a-91c21f21faba" in namespace "containers-1350" to be "success or failure" Aug 21 19:01:51.581: INFO: Pod "client-containers-311e7ff2-1790-4612-ab4a-91c21f21faba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09104ms Aug 21 19:01:53.585: INFO: Pod "client-containers-311e7ff2-1790-4612-ab4a-91c21f21faba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011933861s Aug 21 19:01:55.589: INFO: Pod "client-containers-311e7ff2-1790-4612-ab4a-91c21f21faba": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.016204941s STEP: Saw pod success Aug 21 19:01:55.589: INFO: Pod "client-containers-311e7ff2-1790-4612-ab4a-91c21f21faba" satisfied condition "success or failure" Aug 21 19:01:55.593: INFO: Trying to get logs from node iruya-worker2 pod client-containers-311e7ff2-1790-4612-ab4a-91c21f21faba container test-container: STEP: delete the pod Aug 21 19:01:55.634: INFO: Waiting for pod client-containers-311e7ff2-1790-4612-ab4a-91c21f21faba to disappear Aug 21 19:01:55.647: INFO: Pod client-containers-311e7ff2-1790-4612-ab4a-91c21f21faba no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:01:55.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1350" for this suite. Aug 21 19:02:01.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:02:01.769: INFO: namespace containers-1350 deletion completed in 6.118671883s • [SLOW TEST:10.301 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:02:01.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-9224.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9224.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 19:02:07.931: INFO: DNS probes using dns-9224/dns-test-3247a8b3-ce1b-46ef-a1cd-c83fe83caa1f succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:02:08.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9224" for this suite. Aug 21 19:02:14.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:02:14.339: INFO: namespace dns-9224 deletion completed in 6.25247555s • [SLOW TEST:12.569 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:02:14.339: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Aug 21 19:02:14.405: INFO: Waiting up to 5m0s for pod "pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0" in namespace "emptydir-8806" to be "success or failure" Aug 21 
19:02:14.408: INFO: Pod "pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.538989ms Aug 21 19:02:16.412: INFO: Pod "pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007068146s Aug 21 19:02:18.415: INFO: Pod "pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010283909s Aug 21 19:02:20.419: INFO: Pod "pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014663368s STEP: Saw pod success Aug 21 19:02:20.419: INFO: Pod "pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0" satisfied condition "success or failure" Aug 21 19:02:20.422: INFO: Trying to get logs from node iruya-worker pod pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0 container test-container: STEP: delete the pod Aug 21 19:02:20.470: INFO: Waiting for pod pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0 to disappear Aug 21 19:02:20.485: INFO: Pod pod-80b6e69f-23a1-4dc4-b4b6-a089463a5ef0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:02:20.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8806" for this suite. Aug 21 19:02:26.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:02:26.586: INFO: namespace emptydir-8806 deletion completed in 6.098500411s • [SLOW TEST:12.247 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:02:26.587: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Aug 21 19:02:26.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4852' Aug 21 19:02:26.887: INFO: stderr: "" Aug 21 19:02:26.887: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
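The patch applied below is a strategic-merge patch that only adds an annotation; the JSON body passed to -p in the log is equivalent to this fragment, shown as YAML for readability:

# kubectl patch pod <pod-name> --namespace=kubectl-4852 -p '{"metadata":{"annotations":{"x":"y"}}}'
metadata:
  annotations:
    x: "y"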
Aug 21 19:02:27.900: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:02:27.900: INFO: Found 0 / 1 Aug 21 19:02:28.892: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:02:28.892: INFO: Found 0 / 1 Aug 21 19:02:29.891: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:02:29.891: INFO: Found 0 / 1 Aug 21 19:02:30.892: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:02:30.892: INFO: Found 1 / 1 Aug 21 19:02:30.892: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Aug 21 19:02:30.895: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:02:30.895: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 21 19:02:30.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-jt57p --namespace=kubectl-4852 -p {"metadata":{"annotations":{"x":"y"}}}' Aug 21 19:02:31.004: INFO: stderr: "" Aug 21 19:02:31.004: INFO: stdout: "pod/redis-master-jt57p patched\n" STEP: checking annotations Aug 21 19:02:31.009: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:02:31.009: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:02:31.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4852" for this suite. Aug 21 19:02:53.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:02:53.117: INFO: namespace kubectl-4852 deletion completed in 22.082382187s • [SLOW TEST:26.530 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:02:53.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Aug 21 19:02:53.731: INFO: created pod pod-service-account-defaultsa Aug 21 19:02:53.731: INFO: pod pod-service-account-defaultsa service account token volume mount: true Aug 21 19:02:53.739: INFO: created pod 
pod-service-account-mountsa Aug 21 19:02:53.739: INFO: pod pod-service-account-mountsa service account token volume mount: true Aug 21 19:02:53.782: INFO: created pod pod-service-account-nomountsa Aug 21 19:02:53.783: INFO: pod pod-service-account-nomountsa service account token volume mount: false Aug 21 19:02:53.793: INFO: created pod pod-service-account-defaultsa-mountspec Aug 21 19:02:53.793: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Aug 21 19:02:53.838: INFO: created pod pod-service-account-mountsa-mountspec Aug 21 19:02:53.838: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Aug 21 19:02:53.862: INFO: created pod pod-service-account-nomountsa-mountspec Aug 21 19:02:53.862: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Aug 21 19:02:53.933: INFO: created pod pod-service-account-defaultsa-nomountspec Aug 21 19:02:53.933: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Aug 21 19:02:53.939: INFO: created pod pod-service-account-mountsa-nomountspec Aug 21 19:02:53.939: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Aug 21 19:02:53.960: INFO: created pod pod-service-account-nomountsa-nomountspec Aug 21 19:02:53.960: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:02:53.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-7716" for this suite. 
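The automount opt-out exercised above comes down to one boolean that can sit on either the ServiceAccount or the pod spec; when both are set, the pod-level field wins. A minimal sketch, assuming a hypothetical pod name and the default service account:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                    # hypothetical name
spec:
  automountServiceAccountToken: false   # pod-level opt-out; overrides the ServiceAccount setting
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
EOF

With this set, no token volume appears under /var/run/secrets/kubernetes.io/serviceaccount, which is the "token volume mount: false" condition the log records for the nomount pods.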
Aug 21 19:03:24.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:03:24.217: INFO: namespace svcaccounts-7716 deletion completed in 30.201862817s • [SLOW TEST:31.099 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:03:24.217: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Aug 21 19:03:24.318: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:03:43.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4939" for this suite. 
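The submit-and-remove flow above is easy to replay by hand: watch pod events in one shell while creating and gracefully deleting a pod in another. A sketch with a hypothetical pod name:

# Terminal 1: stream pod lifecycle events as they are observed.
kubectl get pods --watch

# Terminal 2: submit a pod, then delete it with a grace period.
kubectl run test-pod --image=nginx --restart=Never
kubectl delete pod test-pod --grace-period=30

The watch should report the pod appearing, running, and finally disappearing once the kubelet has honoured the termination notice.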
Aug 21 19:03:49.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:03:49.785: INFO: namespace pods-4939 deletion completed in 6.087960643s • [SLOW TEST:25.567 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:03:49.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:03:49.873: INFO: Creating ReplicaSet my-hostname-basic-4877c66e-fb81-4d1e-877f-89b4d391ffbb Aug 21 19:03:49.898: INFO: Pod name my-hostname-basic-4877c66e-fb81-4d1e-877f-89b4d391ffbb: Found 0 pods out of 1 Aug 21 19:03:54.903: INFO: Pod name my-hostname-basic-4877c66e-fb81-4d1e-877f-89b4d391ffbb: Found 1 pods out of 1 Aug 21 19:03:54.903: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-4877c66e-fb81-4d1e-877f-89b4d391ffbb" is running Aug 21 19:03:54.906: INFO: Pod "my-hostname-basic-4877c66e-fb81-4d1e-877f-89b4d391ffbb-mz4q2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 19:03:49 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 19:03:53 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 19:03:53 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 19:03:49 +0000 UTC Reason: Message:}]) Aug 21 19:03:54.906: INFO: Trying to dial the pod Aug 21 19:03:59.918: INFO: Controller my-hostname-basic-4877c66e-fb81-4d1e-877f-89b4d391ffbb: Got expected result from replica 1 [my-hostname-basic-4877c66e-fb81-4d1e-877f-89b4d391ffbb-mz4q2]: "my-hostname-basic-4877c66e-fb81-4d1e-877f-89b4d391ffbb-mz4q2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:03:59.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-7775" for this suite. 
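A ReplicaSet equivalent to the one this test creates looks roughly like the following; the image is assumed to be the suite's serve-hostname helper, which answers HTTP requests with the pod's hostname (image tag and port are best guesses, not taken from this log):

cat <<'EOF' | kubectl create -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic              # hypothetical name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed test image
        ports:
        - containerPort: 9376          # assumed serve-hostname port
EOF

The "Got expected result from replica 1" line above corresponds to dialing each replica and checking that the response equals the pod name.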
Aug 21 19:04:05.932: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:04:06.003: INFO: namespace replicaset-7775 deletion completed in 6.082418809s • [SLOW TEST:16.218 seconds] [sig-apps] ReplicaSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:04:06.004: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-mzld STEP: Creating a pod to test atomic-volume-subpath Aug 21 19:04:06.097: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mzld" in namespace "subpath-8085" to be "success or failure" Aug 21 19:04:06.122: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Pending", Reason="", readiness=false. Elapsed: 24.474666ms Aug 21 19:04:08.126: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02867362s Aug 21 19:04:10.130: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 4.0329504s Aug 21 19:04:12.134: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 6.036784915s Aug 21 19:04:14.138: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 8.040876217s Aug 21 19:04:16.143: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 10.045524576s Aug 21 19:04:18.147: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 12.049930503s Aug 21 19:04:20.152: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 14.054193564s Aug 21 19:04:22.156: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 16.058567282s Aug 21 19:04:24.160: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 18.062909631s Aug 21 19:04:26.164: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. 
Elapsed: 20.067044789s Aug 21 19:04:28.168: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 22.070892482s Aug 21 19:04:30.173: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Running", Reason="", readiness=true. Elapsed: 24.075511718s Aug 21 19:04:32.177: INFO: Pod "pod-subpath-test-secret-mzld": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.079879277s STEP: Saw pod success Aug 21 19:04:32.177: INFO: Pod "pod-subpath-test-secret-mzld" satisfied condition "success or failure" Aug 21 19:04:32.181: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-mzld container test-container-subpath-secret-mzld: STEP: delete the pod Aug 21 19:04:32.214: INFO: Waiting for pod pod-subpath-test-secret-mzld to disappear Aug 21 19:04:32.269: INFO: Pod pod-subpath-test-secret-mzld no longer exists STEP: Deleting pod pod-subpath-test-secret-mzld Aug 21 19:04:32.269: INFO: Deleting pod "pod-subpath-test-secret-mzld" in namespace "subpath-8085" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:04:32.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8085" for this suite. Aug 21 19:04:38.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:04:38.376: INFO: namespace subpath-8085 deletion completed in 6.100460548s • [SLOW TEST:32.372 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:04:38.376: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-33def7f0-1512-4771-a738-42f5297c8715 STEP: Creating configMap with name cm-test-opt-upd-d2ad77b0-802a-4f27-9aec-6ab47fe5edbf STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-33def7f0-1512-4771-a738-42f5297c8715 STEP: Updating configmap cm-test-opt-upd-d2ad77b0-802a-4f27-9aec-6ab47fe5edbf STEP: Creating configMap with name 
cm-test-opt-create-f0956e8a-4485-4c8e-93dd-d37402f072bd STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:04:48.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3829" for this suite. Aug 21 19:05:10.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:05:10.942: INFO: namespace configmap-3829 deletion completed in 22.194508307s • [SLOW TEST:32.566 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:05:10.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Aug 21 19:05:15.716: INFO: Successfully updated pod "annotationupdateb49842b7-c3d9-4859-8380-ff9f015fe4ca" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:05:17.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3260" for this suite. 
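The annotation-update propagation above relies on a projected downwardAPI volume: the kubelet rewrites the mounted file when pod metadata changes, so no container restart is needed. A minimal sketch with hypothetical names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo          # hypothetical name
  annotations:
    build: "one"
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF

# Change the annotation and watch the mounted file follow:
kubectl annotate pod annotationupdate-demo build=two --overwrite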
Aug 21 19:05:39.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:05:39.856: INFO: namespace projected-3260 deletion completed in 22.094569993s • [SLOW TEST:28.913 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:05:39.857: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0821 19:05:52.756908 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 21 19:05:52.757: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:05:52.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3876" for this suite. 
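The behaviour verified above hinges on ownerReferences: a dependent is garbage-collected only once it has no remaining live owners, so the pods that list both RCs survive the deletion of simpletest-rc-to-be-deleted while it waits on its dependents. Roughly what a doubly-owned pod's metadata shows; the pod name and UIDs here are placeholders:

kubectl get pod <pod-name> -o yaml | grep -A 9 ownerReferences
#  ownerReferences:
#  - apiVersion: v1
#    kind: ReplicationController
#    name: simpletest-rc-to-be-deleted
#    uid: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa   # placeholder UID
#  - apiVersion: v1
#    kind: ReplicationController
#    name: simpletest-rc-to-stay
#    uid: bbbbbbbb-bbbb-bbbb-bbbb-bbbbbbbbbbbb   # placeholder UID

Once simpletest-rc-to-be-deleted is gone, the collector simply drops its entry from each pod's ownerReferences and leaves the pods owned by simpletest-rc-to-stay.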
Aug 21 19:06:01.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:06:01.121: INFO: namespace gc-3876 deletion completed in 8.143485358s • [SLOW TEST:21.264 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:06:01.121: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 21 19:06:06.284: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:06:06.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2486" for this suite. 
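The shape of the pod behind this test: a non-root user writes its last words to a non-default terminationMessagePath, and the kubelet copies them into the container's terminated state. A sketch, with the pod name, UID, and exact path as assumptions:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    securityContext:
      runAsUser: 1000                  # the non-root part of the test
    terminationMessagePath: /dev/termination-custom-log   # assumed non-default path
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
EOF

# After the container exits, the message surfaces in status:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'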
Aug 21 19:06:12.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:06:12.458: INFO: namespace container-runtime-2486 deletion completed in 6.098417386s • [SLOW TEST:11.337 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:06:12.459: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 21 19:06:12.823: INFO: Waiting up to 5m0s for pod "downward-api-bb58fdb2-652f-442c-9610-12c0ca56fddf" in namespace "downward-api-5165" to be "success or failure" Aug 21 19:06:12.966: INFO: Pod "downward-api-bb58fdb2-652f-442c-9610-12c0ca56fddf": Phase="Pending", Reason="", readiness=false. Elapsed: 142.8236ms Aug 21 19:06:14.970: INFO: Pod "downward-api-bb58fdb2-652f-442c-9610-12c0ca56fddf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146212355s Aug 21 19:06:16.974: INFO: Pod "downward-api-bb58fdb2-652f-442c-9610-12c0ca56fddf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.150212872s STEP: Saw pod success Aug 21 19:06:16.974: INFO: Pod "downward-api-bb58fdb2-652f-442c-9610-12c0ca56fddf" satisfied condition "success or failure" Aug 21 19:06:16.989: INFO: Trying to get logs from node iruya-worker2 pod downward-api-bb58fdb2-652f-442c-9610-12c0ca56fddf container dapi-container: STEP: delete the pod Aug 21 19:06:17.002: INFO: Waiting for pod downward-api-bb58fdb2-652f-442c-9610-12c0ca56fddf to disappear Aug 21 19:06:17.006: INFO: Pod downward-api-bb58fdb2-652f-442c-9610-12c0ca56fddf no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:06:17.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5165" for this suite. Aug 21 19:06:23.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:06:23.089: INFO: namespace downward-api-5165 deletion completed in 6.080011374s • [SLOW TEST:10.630 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:06:23.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
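The handler container just created is the target of the hook: deleting a pod that declares a preStop httpGet fires an HTTP GET at the configured address before the container is stopped, which is what the "check prestop hook" step below verifies. A sketch of the hooked pod; the handler address, port, and path are placeholders:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook     # name as it appears in this log
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      preStop:
        httpGet:
          host: 10.244.1.5             # placeholder: the handler pod's IP
          port: 8080                   # placeholder port
          path: /echo?msg=prestop-hook-fired
EOF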
[It] should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 21 19:06:31.196: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 19:06:31.206: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 19:06:33.206: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 19:06:33.210: INFO: Pod pod-with-prestop-http-hook still exists Aug 21 19:06:35.206: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Aug 21 19:06:35.210: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:06:35.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5253" for this suite. Aug 21 19:06:57.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:06:57.330: INFO: namespace container-lifecycle-hook-5253 deletion completed in 22.110647586s • [SLOW TEST:34.241 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:06:57.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:06:57.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
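The QOS verification above is driven entirely by resource requests and limits: when every container's requests equal its limits the pod is Guaranteed, requests below limits give Burstable, and no requests or limits at all give BestEffort. A Guaranteed-class sketch with assumed names and sizes:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: qos-demo                       # hypothetical name
spec:
  containers:
  - name: main
    image: nginx
    resources:
      requests:
        cpu: 100m
        memory: 100Mi
      limits:                          # requests == limits => Guaranteed
        cpu: 100m
        memory: 100Mi
EOF

kubectl get pod qos-demo -o jsonpath='{.status.qosClass}'   # expect: Guaranteed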
STEP: Destroying namespace "pods-5771" for this suite. Aug 21 19:07:19.489: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:07:19.561: INFO: namespace pods-5771 deletion completed in 22.093722572s • [SLOW TEST:22.230 seconds] [k8s.io] [sig-node] Pods Extended /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:07:19.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 21 19:07:19.628: INFO: Waiting up to 5m0s for pod "pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1" in namespace "emptydir-9445" to be "success or failure" Aug 21 19:07:19.631: INFO: Pod "pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.541029ms Aug 21 19:07:21.636: INFO: Pod "pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007673835s Aug 21 19:07:23.643: INFO: Pod "pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1": Phase="Running", Reason="", readiness=true. Elapsed: 4.015102151s Aug 21 19:07:25.648: INFO: Pod "pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019908834s STEP: Saw pod success Aug 21 19:07:25.648: INFO: Pod "pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1" satisfied condition "success or failure" Aug 21 19:07:25.651: INFO: Trying to get logs from node iruya-worker pod pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1 container test-container: STEP: delete the pod Aug 21 19:07:25.675: INFO: Waiting for pod pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1 to disappear Aug 21 19:07:25.685: INFO: Pod pod-89a55c55-afcd-488c-8bad-ac5879ecf2f1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:07:25.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9445" for this suite. 
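Each emptyDir permutation in this family reduces to one pod: pick a user (root or not), a file mode, and a medium (node disk by default, tmpfs via medium: Memory), then inspect the result. A sketch of the (non-root,0644,default) case just exercised, with hypothetical names and the check collapsed into a busybox one-liner:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                  # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                    # the non-root part of the permutation
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && stat -c '%a %u' /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                       # default medium; use {medium: Memory} for tmpfs
EOF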
Aug 21 19:07:31.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:07:31.769: INFO: namespace emptydir-9445 deletion completed in 6.080242743s • [SLOW TEST:12.207 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:07:31.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-cb4b4ca0-552d-430b-a577-29cbd973e03e STEP: Creating a pod to test consume configMaps Aug 21 19:07:31.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-eb1e3b75-b5a6-428a-a959-bd74d05b3536" in namespace "configmap-2191" to be "success or failure" Aug 21 19:07:31.991: INFO: Pod "pod-configmaps-eb1e3b75-b5a6-428a-a959-bd74d05b3536": Phase="Pending", Reason="", readiness=false. Elapsed: 22.513325ms Aug 21 19:07:34.123: INFO: Pod "pod-configmaps-eb1e3b75-b5a6-428a-a959-bd74d05b3536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153959028s Aug 21 19:07:36.127: INFO: Pod "pod-configmaps-eb1e3b75-b5a6-428a-a959-bd74d05b3536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.157960413s STEP: Saw pod success Aug 21 19:07:36.127: INFO: Pod "pod-configmaps-eb1e3b75-b5a6-428a-a959-bd74d05b3536" satisfied condition "success or failure" Aug 21 19:07:36.129: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-eb1e3b75-b5a6-428a-a959-bd74d05b3536 container configmap-volume-test: STEP: delete the pod Aug 21 19:07:36.163: INFO: Waiting for pod pod-configmaps-eb1e3b75-b5a6-428a-a959-bd74d05b3536 to disappear Aug 21 19:07:36.169: INFO: Pod pod-configmaps-eb1e3b75-b5a6-428a-a959-bd74d05b3536 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:07:36.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2191" for this suite. 
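The multi-volume consumption above mounts a single configMap through two separate volumes in one pod spec; both mount points see the same keys. A sketch with assumed names:

kubectl create configmap demo-config --from-literal=data-1=value-1   # hypothetical configMap

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-two-mounts           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-one/data-1 /etc/cm-two/data-1"]
    volumeMounts:
    - name: cm-one
      mountPath: /etc/cm-one
    - name: cm-two
      mountPath: /etc/cm-two
  volumes:
  - name: cm-one
    configMap:
      name: demo-config
  - name: cm-two
    configMap:
      name: demo-config                # same configMap, second volume
EOF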
Aug 21 19:07:42.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:07:42.270: INFO: namespace configmap-2191 deletion completed in 6.098387935s • [SLOW TEST:10.502 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:07:42.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:08:22.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9859" for this suite. 
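The terminate-cmd-rp* names above suggest one pod per restartPolicy permutation (Always, OnFailure, Never), each running a container whose exit code drives the expected RestartCount, Phase, and State. One permutation as a sketch, with hypothetical names:

cat <<'EOF' | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo                 # hypothetical name
spec:
  restartPolicy: OnFailure
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 1"]    # non-zero exit => restart under OnFailure
EOF

# Inspect the fields the test asserts on:
kubectl get pod terminate-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'
kubectl get pod terminate-demo -o jsonpath='{.status.phase}'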
Aug 21 19:08:28.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:08:28.480: INFO: namespace container-runtime-9859 deletion completed in 6.180524725s • [SLOW TEST:46.209 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:08:28.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:08:29.777: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Aug 21 19:08:29.901: INFO: Pod name sample-pod: Found 0 pods out of 1 Aug 21 19:08:34.904: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 21 19:08:34.904: INFO: Creating deployment "test-rolling-update-deployment" Aug 21 19:08:34.907: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Aug 21 19:08:34.917: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Aug 21 19:08:36.923: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Aug 21 19:08:36.926: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", 
Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:08:38.928: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:08:40.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633714, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:08:43.011: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 21 19:08:43.079: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-49,SelfLink:/apis/apps/v1/namespaces/deployment-49/deployments/test-rolling-update-deployment,UID:4c7563b3-bdc3-4fde-8a9f-908f51c61d51,ResourceVersion:1618012,Generation:1,CreationTimestamp:2020-08-21 19:08:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-21 19:08:34 +0000 UTC 2020-08-21 19:08:34 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-21 19:08:41 +0000 UTC 2020-08-21 19:08:34 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 21 19:08:43.082: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-49,SelfLink:/apis/apps/v1/namespaces/deployment-49/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:148330fc-b945-49d1-987c-ce1587af98a8,ResourceVersion:1618001,Generation:1,CreationTimestamp:2020-08-21 19:08:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4c7563b3-bdc3-4fde-8a9f-908f51c61d51 0xc002234e37 0xc002234e38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 
79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 21 19:08:43.082: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Aug 21 19:08:43.082: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-49,SelfLink:/apis/apps/v1/namespaces/deployment-49/replicasets/test-rolling-update-controller,UID:a8fc6f41-68fd-4fd7-907e-0283713117ed,ResourceVersion:1618011,Generation:2,CreationTimestamp:2020-08-21 19:08:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 4c7563b3-bdc3-4fde-8a9f-908f51c61d51 0xc002234d4f 0xc002234d60}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 21 19:08:43.084: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-7n2gk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-7n2gk,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-49,SelfLink:/api/v1/namespaces/deployment-49/pods/test-rolling-update-deployment-79f6b9d75c-7n2gk,UID:f1b93433-1ab5-4502-92e0-95bab70b6d04,ResourceVersion:1618000,Generation:0,CreationTimestamp:2020-08-21 19:08:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 148330fc-b945-49d1-987c-ce1587af98a8 0xc002235a37 0xc002235a38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-k5qpz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-k5qpz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-k5qpz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002235ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002235ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:08:34 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:08:41 +0000 UTC } {ContainersReady 
True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:08:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:08:34 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.224,StartTime:2020-08-21 19:08:34 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-21 19:08:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://574981d738fe5baae2731c96c8b134dd1bb24f0629a51b23e6b23fe3b51259e4}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:08:43.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-49" for this suite. Aug 21 19:08:51.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:08:51.190: INFO: namespace deployment-49 deletion completed in 8.102184611s • [SLOW TEST:22.709 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:08:51.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 21 19:08:51.247: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:09:00.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2165" for this suite. 
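For reference on the Deployment spec that completed above: its rollout is driven by the RollingUpdate strategy printed in the controller dump, with MaxUnavailable and MaxSurge both at 25%. A minimal sketch of how that strategy is expressed with the Kubernetes Go API — an illustration inferred from the dump, not code taken from this suite:

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    // During a rolling update, at most 25% of the desired replicas may be
    // unavailable at once, and at most 25% extra may be created above the
    // desired count; the old ReplicaSet is scaled down only as the new one
    // becomes ready, which is why the old RS above ends at Replicas:*0.
    var maxUnavailable = intstr.FromString("25%")
    var maxSurge = intstr.FromString("25%")

    var strategy = appsv1.DeploymentStrategy{
        Type: appsv1.RollingUpdateDeploymentStrategyType,
        RollingUpdate: &appsv1.RollingUpdateDeployment{
            MaxUnavailable: &maxUnavailable,
            MaxSurge:       &maxSurge,
        },
    }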
Aug 21 19:09:06.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:09:06.384: INFO: namespace init-container-2165 deletion completed in 6.154932682s • [SLOW TEST:15.194 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:09:06.385: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Aug 21 19:09:13.019: INFO: 10 pods remaining Aug 21 19:09:13.019: INFO: 9 pods have nil DeletionTimestamp Aug 21 19:09:13.019: INFO: Aug 21 19:09:14.124: INFO: 0 pods remaining Aug 21 19:09:14.124: INFO: 0 pods have nil DeletionTimestamp Aug 21 19:09:14.124: INFO: STEP: Gathering metrics W0821 19:09:15.053576 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 21 19:09:15.053: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:09:15.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8809" for this suite. 
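The garbage-collector spec above exercised foreground cascading deletion: the rc stayed visible, with a deletionTimestamp set, until every pod it owned was gone, which is why the log counts pods down to zero before the rc disappears. A minimal sketch of the delete options involved — hypothetical client code, not this suite's source:

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // Foreground propagation blocks the owner's disappearance on the
    // garbage collector first deleting all of its dependents.
    var foreground = metav1.DeletePropagationForeground

    var deleteOpts = metav1.DeleteOptions{PropagationPolicy: &foreground}

    // With a recent client-go this would be passed as, for example:
    //   client.CoreV1().ReplicationControllers(ns).Delete(ctx, "my-rc", deleteOpts)
    // (the 1.15-era client instead took *metav1.DeleteOptions and no context).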
Aug 21 19:09:21.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:09:21.429: INFO: namespace gc-8809 deletion completed in 6.372733721s • [SLOW TEST:15.044 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:09:21.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0821 19:09:52.029235 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Aug 21 19:09:52.029: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:09:52.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-553" for this suite. 
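The spec above is the mirror image: with PropagationPolicy Orphan, deleting the Deployment leaves its ReplicaSet behind, with the owner reference stripped rather than cascaded. A sketch under the same assumptions — a hypothetical helper using recent client-go signatures:

    package sketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // deleteAndOrphan removes the Deployment but leaves its ReplicaSets
    // (and therefore their pods) running, which is what the 30-second
    // wait above verifies.
    func deleteAndOrphan(ctx context.Context, c kubernetes.Interface, ns, name string) error {
        orphan := metav1.DeletePropagationOrphan
        return c.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
            PropagationPolicy: &orphan,
        })
    }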
Aug 21 19:09:58.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:09:58.397: INFO: namespace gc-553 deletion completed in 6.365161929s • [SLOW TEST:36.967 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:09:58.397: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 21 19:09:58.560: INFO: PodSpec: initContainers in spec.initContainers Aug 21 19:10:52.732: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-05325521-c6d9-46ef-9a55-f76621445c3b", GenerateName:"", Namespace:"init-container-3753", SelfLink:"/api/v1/namespaces/init-container-3753/pods/pod-init-05325521-c6d9-46ef-9a55-f76621445c3b", UID:"be2f2a30-be28-4da2-9e35-f79039f31c93", ResourceVersion:"1618630", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63733633798, loc:(*time.Location)(0x7edea20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"560906109"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-9mn5r", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0014ba3c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9mn5r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9mn5r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-9mn5r", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", 
SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001dc2798), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0028baae0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dc2820)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001dc2840)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001dc2848), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001dc284c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633798, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633798, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633798, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733633798, loc:(*time.Location)(0x7edea20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.5", PodIP:"10.244.2.232", StartTime:(*v1.Time)(0xc0025d2a80), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c58e00)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002c58e70)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", 
ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://2248ddf8e657de0b49e0e69b71ff4e24223b2f251887d48426011619cad8f523"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025d2ac0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0025d2aa0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:10:52.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3753" for this suite. Aug 21 19:11:14.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:11:14.869: INFO: namespace init-container-3753 deletion completed in 22.124148257s • [SLOW TEST:76.472 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:11:14.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Aug 21 19:11:14.918: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Aug 21 19:11:14.962: INFO: Waiting for terminating namespaces to be deleted... 
Aug 21 19:11:14.965: INFO: Logging pods the kubelet thinks are on node iruya-worker before test Aug 21 19:11:14.970: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded) Aug 21 19:11:14.970: INFO: Container kindnet-cni ready: true, restart count 0 Aug 21 19:11:14.970: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded) Aug 21 19:11:14.970: INFO: Container kube-proxy ready: true, restart count 0 Aug 21 19:11:14.970: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test Aug 21 19:11:14.976: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded) Aug 21 19:11:14.976: INFO: Container kindnet-cni ready: true, restart count 0 Aug 21 19:11:14.976: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container status recorded) Aug 21 19:11:14.976: INFO: Container kube-proxy ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Aug 21 19:11:15.090: INFO: Pod kindnet-nkf5n requesting resource cpu=100m on Node iruya-worker Aug 21 19:11:15.090: INFO: Pod kindnet-xsdzz requesting resource cpu=100m on Node iruya-worker2 Aug 21 19:11:15.090: INFO: Pod kube-proxy-5zw8s requesting resource cpu=0m on Node iruya-worker Aug 21 19:11:15.090: INFO: Pod kube-proxy-b98qt requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires an unavailable amount of CPU. 
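The "additional" pod created here only has to request more CPU than any node has left after the filler pods; the events below show the resulting FailedScheduling. A hypothetical sketch of such a pod — the request value is assumed, since the log does not print it:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // A pod whose CPU request exceeds every node's remaining allocatable
    // CPU stays Pending with a FailedScheduling event like the one below.
    var additionalPod = corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "additional-pod",
                Image: "k8s.gcr.io/pause:3.1",
                Resources: corev1.ResourceRequirements{
                    // Assumed value: anything above the nodes' free capacity.
                    Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("2")},
                },
            }},
        },
    }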
STEP: Considering event: Type = [Normal], Name = [filler-pod-1a675638-786d-4873-b042-41dec9d97763.162d5e40c8983be7], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4799/filler-pod-1a675638-786d-4873-b042-41dec9d97763 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-1a675638-786d-4873-b042-41dec9d97763.162d5e4112432b8d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-1a675638-786d-4873-b042-41dec9d97763.162d5e415b4ec4e2], Reason = [Created], Message = [Created container filler-pod-1a675638-786d-4873-b042-41dec9d97763] STEP: Considering event: Type = [Normal], Name = [filler-pod-1a675638-786d-4873-b042-41dec9d97763.162d5e41775a59d8], Reason = [Started], Message = [Started container filler-pod-1a675638-786d-4873-b042-41dec9d97763] STEP: Considering event: Type = [Normal], Name = [filler-pod-3e9e074d-5886-4889-a895-df4489d14006.162d5e40c8337322], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4799/filler-pod-3e9e074d-5886-4889-a895-df4489d14006 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-3e9e074d-5886-4889-a895-df4489d14006.162d5e415f93169f], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3e9e074d-5886-4889-a895-df4489d14006.162d5e418f0792a6], Reason = [Created], Message = [Created container filler-pod-3e9e074d-5886-4889-a895-df4489d14006] STEP: Considering event: Type = [Normal], Name = [filler-pod-3e9e074d-5886-4889-a895-df4489d14006.162d5e419e3730e8], Reason = [Started], Message = [Started container filler-pod-3e9e074d-5886-4889-a895-df4489d14006] STEP: Considering event: Type = [Warning], Name = [additional-pod.162d5e41b7fb7b22], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:11:20.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4799" for this suite. 
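Note that the FailedScheduling message above also rules out one node for an untolerated taint; that is the control-plane node. A pod could opt back in with a toleration roughly like the following — the key shown is the conventional master taint of this era and is an assumption, since the log does not name it:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // A pod carrying this toleration would no longer be filtered out by a
    // matching NoSchedule taint on the control-plane node.
    var toleration = corev1.Toleration{
        Key:      "node-role.kubernetes.io/master", // assumed key
        Operator: corev1.TolerationOpExists,
        Effect:   corev1.TaintEffectNoSchedule,
    }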
Aug 21 19:11:26.287: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:11:26.401: INFO: namespace sched-pred-4799 deletion completed in 6.126442581s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.532 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:11:26.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-2820/configmap-test-4d2264bf-b486-40ad-bac2-585c0f4376a1 STEP: Creating a pod to test consume configMaps Aug 21 19:11:26.539: INFO: Waiting up to 5m0s for pod "pod-configmaps-33493501-7f55-4a4c-bc3b-2d0235188fb4" in namespace "configmap-2820" to be "success or failure" Aug 21 19:11:26.611: INFO: Pod "pod-configmaps-33493501-7f55-4a4c-bc3b-2d0235188fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 71.771342ms Aug 21 19:11:28.750: INFO: Pod "pod-configmaps-33493501-7f55-4a4c-bc3b-2d0235188fb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.210050148s Aug 21 19:11:30.754: INFO: Pod "pod-configmaps-33493501-7f55-4a4c-bc3b-2d0235188fb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214226027s STEP: Saw pod success Aug 21 19:11:30.754: INFO: Pod "pod-configmaps-33493501-7f55-4a4c-bc3b-2d0235188fb4" satisfied condition "success or failure" Aug 21 19:11:30.757: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-33493501-7f55-4a4c-bc3b-2d0235188fb4 container env-test: STEP: delete the pod Aug 21 19:11:30.792: INFO: Waiting for pod pod-configmaps-33493501-7f55-4a4c-bc3b-2d0235188fb4 to disappear Aug 21 19:11:30.807: INFO: Pod pod-configmaps-33493501-7f55-4a4c-bc3b-2d0235188fb4 no longer exists [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:11:30.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2820" for this suite. 
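The ConfigMap spec above injects a key as an environment variable and has the container echo it back. A minimal sketch of the wiring, with hypothetical ConfigMap and key names standing in for the generated ones above:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // Injects the value of key "data-1" from ConfigMap "configmap-test"
    // into the container's environment as $DATA_1_ENV; the test then
    // reads the variable back from the pod's logs.
    var envVar = corev1.EnvVar{
        Name: "DATA_1_ENV",
        ValueFrom: &corev1.EnvVarSource{
            ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                Key:                  "data-1",
            },
        },
    }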
Aug 21 19:11:36.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:11:36.907: INFO: namespace configmap-2820 deletion completed in 6.096273406s • [SLOW TEST:10.504 seconds] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:11:36.907: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-4099, will wait for the garbage collector to delete the pods Aug 21 19:11:43.045: INFO: Deleting Job.batch foo took: 26.899172ms Aug 21 19:11:43.345: INFO: Terminating Job.batch foo pods took: 300.318401ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:12:23.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-4099" for this suite. 
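The Job spec above first waits until the number of active pods equals the Job's parallelism, then deletes the Job and waits for the garbage collector to terminate the pods (the Deleting/Terminating timings above). A sketch of such a Job; the parallelism value and the container are assumptions, since the log prints neither:

    package sketch

    import (
        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    // "Ensuring active pods == parallelism" means the test waits until
    // exactly Parallelism pods are running before deleting the Job.
    var job = batchv1.Job{
        ObjectMeta: metav1.ObjectMeta{Name: "foo"},
        Spec: batchv1.JobSpec{
            Parallelism: int32Ptr(2), // assumed; not printed in the log
            Template: corev1.PodTemplateSpec{
                Spec: corev1.PodSpec{
                    RestartPolicy: corev1.RestartPolicyNever,
                    Containers: []corev1.Container{
                        {Name: "c", Image: "docker.io/library/busybox:1.29", Command: []string{"sleep", "3600"}},
                    },
                },
            },
        },
    }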
Aug 21 19:12:31.402: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:12:31.513: INFO: namespace job-4099 deletion completed in 8.124653072s • [SLOW TEST:54.606 seconds] [sig-apps] Job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:12:31.513: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-dbea4200-049f-40b3-9a53-be8911c0a9d0 STEP: Creating a pod to test consume configMaps Aug 21 19:12:31.601: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f8d2b14-9ce9-4624-a226-f77d8d899dbb" in namespace "configmap-4933" to be "success or failure" Aug 21 19:12:31.608: INFO: Pod "pod-configmaps-5f8d2b14-9ce9-4624-a226-f77d8d899dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 7.605137ms Aug 21 19:12:33.612: INFO: Pod "pod-configmaps-5f8d2b14-9ce9-4624-a226-f77d8d899dbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011370585s Aug 21 19:12:35.616: INFO: Pod "pod-configmaps-5f8d2b14-9ce9-4624-a226-f77d8d899dbb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015698628s STEP: Saw pod success Aug 21 19:12:35.617: INFO: Pod "pod-configmaps-5f8d2b14-9ce9-4624-a226-f77d8d899dbb" satisfied condition "success or failure" Aug 21 19:12:35.619: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5f8d2b14-9ce9-4624-a226-f77d8d899dbb container configmap-volume-test: STEP: delete the pod Aug 21 19:12:35.634: INFO: Waiting for pod pod-configmaps-5f8d2b14-9ce9-4624-a226-f77d8d899dbb to disappear Aug 21 19:12:35.638: INFO: Pod pod-configmaps-5f8d2b14-9ce9-4624-a226-f77d8d899dbb no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:12:35.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4933" for this suite. 
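Unlike the environment-variable variant earlier, this ConfigMap spec mounts the data as files, remapping keys to paths and setting an explicit file mode (the "mappings and Item mode" in the spec name). A sketch with assumed names and an assumed 0400 mode:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    func modePtr(m int32) *int32 { return &m }

    // Projects key "data-1" of the ConfigMap as the file "path/to/data-2"
    // with mode 0400 inside the mounted volume; the pod reads the file
    // back and checks both its content and its mode.
    var volume = corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
                Items: []corev1.KeyToPath{
                    {Key: "data-1", Path: "path/to/data-2", Mode: modePtr(0400)},
                },
            },
        },
    }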
Aug 21 19:12:41.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:12:41.720: INFO: namespace configmap-4933 deletion completed in 6.079113061s • [SLOW TEST:10.206 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:12:41.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-9307 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-9307 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9307 Aug 21 19:12:41.836: INFO: Found 0 stateful pods, waiting for 1 Aug 21 19:12:51.841: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Aug 21 19:12:51.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9307 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 21 19:12:54.728: INFO: stderr: "I0821 19:12:54.599700 237 log.go:172] (0xc00010ee70) (0xc000702820) Create stream\nI0821 19:12:54.599734 237 log.go:172] (0xc00010ee70) (0xc000702820) Stream added, broadcasting: 1\nI0821 19:12:54.602902 237 log.go:172] (0xc00010ee70) Reply frame received for 1\nI0821 19:12:54.602973 237 log.go:172] (0xc00010ee70) (0xc000566000) Create stream\nI0821 19:12:54.602994 237 log.go:172] (0xc00010ee70) (0xc000566000) Stream added, broadcasting: 3\nI0821 19:12:54.604088 237 log.go:172] (0xc00010ee70) Reply frame received for 3\nI0821 19:12:54.604134 237 log.go:172] (0xc00010ee70) (0xc0005d6000) Create stream\nI0821 
19:12:54.604156 237 log.go:172] (0xc00010ee70) (0xc0005d6000) Stream added, broadcasting: 5\nI0821 19:12:54.605414 237 log.go:172] (0xc00010ee70) Reply frame received for 5\nI0821 19:12:54.675186 237 log.go:172] (0xc00010ee70) Data frame received for 5\nI0821 19:12:54.675213 237 log.go:172] (0xc0005d6000) (5) Data frame handling\nI0821 19:12:54.675231 237 log.go:172] (0xc0005d6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 19:12:54.710290 237 log.go:172] (0xc00010ee70) Data frame received for 3\nI0821 19:12:54.710318 237 log.go:172] (0xc000566000) (3) Data frame handling\nI0821 19:12:54.710336 237 log.go:172] (0xc000566000) (3) Data frame sent\nI0821 19:12:54.710627 237 log.go:172] (0xc00010ee70) Data frame received for 5\nI0821 19:12:54.710667 237 log.go:172] (0xc0005d6000) (5) Data frame handling\nI0821 19:12:54.710712 237 log.go:172] (0xc00010ee70) Data frame received for 3\nI0821 19:12:54.710739 237 log.go:172] (0xc000566000) (3) Data frame handling\nI0821 19:12:54.713038 237 log.go:172] (0xc00010ee70) Data frame received for 1\nI0821 19:12:54.713068 237 log.go:172] (0xc000702820) (1) Data frame handling\nI0821 19:12:54.713081 237 log.go:172] (0xc000702820) (1) Data frame sent\nI0821 19:12:54.713095 237 log.go:172] (0xc00010ee70) (0xc000702820) Stream removed, broadcasting: 1\nI0821 19:12:54.713141 237 log.go:172] (0xc00010ee70) Go away received\nI0821 19:12:54.713570 237 log.go:172] (0xc00010ee70) (0xc000702820) Stream removed, broadcasting: 1\nI0821 19:12:54.713595 237 log.go:172] (0xc00010ee70) (0xc000566000) Stream removed, broadcasting: 3\nI0821 19:12:54.713606 237 log.go:172] (0xc00010ee70) (0xc0005d6000) Stream removed, broadcasting: 5\n" Aug 21 19:12:54.729: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 21 19:12:54.729: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 21 19:12:54.732: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Aug 21 19:13:04.737: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 19:13:04.737: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 19:13:04.787: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999637s Aug 21 19:13:05.791: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.960187399s Aug 21 19:13:06.796: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.955416922s Aug 21 19:13:07.801: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.950561398s Aug 21 19:13:08.806: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.945314476s Aug 21 19:13:09.811: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.940522712s Aug 21 19:13:10.815: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.935719415s Aug 21 19:13:11.820: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.931278099s Aug 21 19:13:12.824: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.926538723s Aug 21 19:13:13.829: INFO: Verifying statefulset ss doesn't scale past 1 for another 922.375556ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9307 Aug 21 19:13:14.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9307 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/share/nginx/html/ || true' Aug 21 19:13:15.015: INFO: stderr: "I0821 19:13:14.947697 270 log.go:172] (0xc00096e630) (0xc0002dea00) Create stream\nI0821 19:13:14.947744 270 log.go:172] (0xc00096e630) (0xc0002dea00) Stream added, broadcasting: 1\nI0821 19:13:14.949331 270 log.go:172] (0xc00096e630) Reply frame received for 1\nI0821 19:13:14.949366 270 log.go:172] (0xc00096e630) (0xc0002deaa0) Create stream\nI0821 19:13:14.949374 270 log.go:172] (0xc00096e630) (0xc0002deaa0) Stream added, broadcasting: 3\nI0821 19:13:14.950020 270 log.go:172] (0xc00096e630) Reply frame received for 3\nI0821 19:13:14.950044 270 log.go:172] (0xc00096e630) (0xc0005e1e00) Create stream\nI0821 19:13:14.950060 270 log.go:172] (0xc00096e630) (0xc0005e1e00) Stream added, broadcasting: 5\nI0821 19:13:14.950620 270 log.go:172] (0xc00096e630) Reply frame received for 5\nI0821 19:13:15.006326 270 log.go:172] (0xc00096e630) Data frame received for 3\nI0821 19:13:15.006368 270 log.go:172] (0xc0002deaa0) (3) Data frame handling\nI0821 19:13:15.006384 270 log.go:172] (0xc0002deaa0) (3) Data frame sent\nI0821 19:13:15.006392 270 log.go:172] (0xc00096e630) Data frame received for 3\nI0821 19:13:15.006400 270 log.go:172] (0xc0002deaa0) (3) Data frame handling\nI0821 19:13:15.006418 270 log.go:172] (0xc00096e630) Data frame received for 5\nI0821 19:13:15.006431 270 log.go:172] (0xc0005e1e00) (5) Data frame handling\nI0821 19:13:15.006442 270 log.go:172] (0xc0005e1e00) (5) Data frame sent\nI0821 19:13:15.006447 270 log.go:172] (0xc00096e630) Data frame received for 5\nI0821 19:13:15.006451 270 log.go:172] (0xc0005e1e00) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0821 19:13:15.007562 270 log.go:172] (0xc00096e630) Data frame received for 1\nI0821 19:13:15.007589 270 log.go:172] (0xc0002dea00) (1) Data frame handling\nI0821 19:13:15.007610 270 log.go:172] (0xc0002dea00) (1) Data frame sent\nI0821 19:13:15.007633 270 log.go:172] (0xc00096e630) (0xc0002dea00) Stream removed, broadcasting: 1\nI0821 19:13:15.007656 270 log.go:172] (0xc00096e630) Go away received\nI0821 19:13:15.007876 270 log.go:172] (0xc00096e630) (0xc0002dea00) Stream removed, broadcasting: 1\nI0821 19:13:15.007890 270 log.go:172] (0xc00096e630) (0xc0002deaa0) Stream removed, broadcasting: 3\nI0821 19:13:15.007897 270 log.go:172] (0xc00096e630) (0xc0005e1e00) Stream removed, broadcasting: 5\n" Aug 21 19:13:15.015: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 21 19:13:15.015: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 21 19:13:15.018: INFO: Found 1 stateful pods, waiting for 3 Aug 21 19:13:25.022: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 19:13:25.022: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 19:13:25.023: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Aug 21 19:13:25.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9307 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 21 19:13:25.249: INFO: stderr: "I0821 19:13:25.150652 291 log.go:172] (0xc0009e4420) (0xc00062e6e0) Create stream\nI0821 19:13:25.150704 291 log.go:172] (0xc0009e4420) (0xc00062e6e0) Stream added, 
broadcasting: 1\nI0821 19:13:25.154179 291 log.go:172] (0xc0009e4420) Reply frame received for 1\nI0821 19:13:25.154230 291 log.go:172] (0xc0009e4420) (0xc00069a320) Create stream\nI0821 19:13:25.154245 291 log.go:172] (0xc0009e4420) (0xc00069a320) Stream added, broadcasting: 3\nI0821 19:13:25.155179 291 log.go:172] (0xc0009e4420) Reply frame received for 3\nI0821 19:13:25.155203 291 log.go:172] (0xc0009e4420) (0xc00062e000) Create stream\nI0821 19:13:25.155211 291 log.go:172] (0xc0009e4420) (0xc00062e000) Stream added, broadcasting: 5\nI0821 19:13:25.156070 291 log.go:172] (0xc0009e4420) Reply frame received for 5\nI0821 19:13:25.240406 291 log.go:172] (0xc0009e4420) Data frame received for 3\nI0821 19:13:25.240431 291 log.go:172] (0xc00069a320) (3) Data frame handling\nI0821 19:13:25.240438 291 log.go:172] (0xc00069a320) (3) Data frame sent\nI0821 19:13:25.240444 291 log.go:172] (0xc0009e4420) Data frame received for 3\nI0821 19:13:25.240448 291 log.go:172] (0xc00069a320) (3) Data frame handling\nI0821 19:13:25.240504 291 log.go:172] (0xc0009e4420) Data frame received for 5\nI0821 19:13:25.240551 291 log.go:172] (0xc00062e000) (5) Data frame handling\nI0821 19:13:25.240576 291 log.go:172] (0xc00062e000) (5) Data frame sent\nI0821 19:13:25.240592 291 log.go:172] (0xc0009e4420) Data frame received for 5\nI0821 19:13:25.240604 291 log.go:172] (0xc00062e000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 19:13:25.242116 291 log.go:172] (0xc0009e4420) Data frame received for 1\nI0821 19:13:25.242157 291 log.go:172] (0xc00062e6e0) (1) Data frame handling\nI0821 19:13:25.242174 291 log.go:172] (0xc00062e6e0) (1) Data frame sent\nI0821 19:13:25.242196 291 log.go:172] (0xc0009e4420) (0xc00062e6e0) Stream removed, broadcasting: 1\nI0821 19:13:25.242212 291 log.go:172] (0xc0009e4420) Go away received\nI0821 19:13:25.242704 291 log.go:172] (0xc0009e4420) (0xc00062e6e0) Stream removed, broadcasting: 1\nI0821 19:13:25.242726 291 log.go:172] (0xc0009e4420) (0xc00069a320) Stream removed, broadcasting: 3\nI0821 19:13:25.242737 291 log.go:172] (0xc0009e4420) (0xc00062e000) Stream removed, broadcasting: 5\n" Aug 21 19:13:25.249: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 21 19:13:25.249: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 21 19:13:25.249: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9307 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 21 19:13:25.489: INFO: stderr: "I0821 19:13:25.376320 313 log.go:172] (0xc000140840) (0xc00046aaa0) Create stream\nI0821 19:13:25.376376 313 log.go:172] (0xc000140840) (0xc00046aaa0) Stream added, broadcasting: 1\nI0821 19:13:25.378795 313 log.go:172] (0xc000140840) Reply frame received for 1\nI0821 19:13:25.378840 313 log.go:172] (0xc000140840) (0xc0008fe000) Create stream\nI0821 19:13:25.378858 313 log.go:172] (0xc000140840) (0xc0008fe000) Stream added, broadcasting: 3\nI0821 19:13:25.379812 313 log.go:172] (0xc000140840) Reply frame received for 3\nI0821 19:13:25.379861 313 log.go:172] (0xc000140840) (0xc00091c000) Create stream\nI0821 19:13:25.379883 313 log.go:172] (0xc000140840) (0xc00091c000) Stream added, broadcasting: 5\nI0821 19:13:25.380872 313 log.go:172] (0xc000140840) Reply frame received for 5\nI0821 19:13:25.451408 313 log.go:172] (0xc000140840) Data frame received for 5\nI0821 19:13:25.451434 313 log.go:172] 
(0xc00091c000) (5) Data frame handling\nI0821 19:13:25.451453 313 log.go:172] (0xc00091c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 19:13:25.479908 313 log.go:172] (0xc000140840) Data frame received for 3\nI0821 19:13:25.479940 313 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0821 19:13:25.479960 313 log.go:172] (0xc0008fe000) (3) Data frame sent\nI0821 19:13:25.480142 313 log.go:172] (0xc000140840) Data frame received for 3\nI0821 19:13:25.480165 313 log.go:172] (0xc0008fe000) (3) Data frame handling\nI0821 19:13:25.480264 313 log.go:172] (0xc000140840) Data frame received for 5\nI0821 19:13:25.480292 313 log.go:172] (0xc00091c000) (5) Data frame handling\nI0821 19:13:25.482210 313 log.go:172] (0xc000140840) Data frame received for 1\nI0821 19:13:25.482244 313 log.go:172] (0xc00046aaa0) (1) Data frame handling\nI0821 19:13:25.482263 313 log.go:172] (0xc00046aaa0) (1) Data frame sent\nI0821 19:13:25.482282 313 log.go:172] (0xc000140840) (0xc00046aaa0) Stream removed, broadcasting: 1\nI0821 19:13:25.482302 313 log.go:172] (0xc000140840) Go away received\nI0821 19:13:25.482559 313 log.go:172] (0xc000140840) (0xc00046aaa0) Stream removed, broadcasting: 1\nI0821 19:13:25.482575 313 log.go:172] (0xc000140840) (0xc0008fe000) Stream removed, broadcasting: 3\nI0821 19:13:25.482583 313 log.go:172] (0xc000140840) (0xc00091c000) Stream removed, broadcasting: 5\n" Aug 21 19:13:25.489: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 21 19:13:25.489: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 21 19:13:25.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9307 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 21 19:13:25.780: INFO: stderr: "I0821 19:13:25.660965 333 log.go:172] (0xc000530420) (0xc000748640) Create stream\nI0821 19:13:25.661037 333 log.go:172] (0xc000530420) (0xc000748640) Stream added, broadcasting: 1\nI0821 19:13:25.666764 333 log.go:172] (0xc000530420) Reply frame received for 1\nI0821 19:13:25.666806 333 log.go:172] (0xc000530420) (0xc00077a3c0) Create stream\nI0821 19:13:25.666816 333 log.go:172] (0xc000530420) (0xc00077a3c0) Stream added, broadcasting: 3\nI0821 19:13:25.667697 333 log.go:172] (0xc000530420) Reply frame received for 3\nI0821 19:13:25.667722 333 log.go:172] (0xc000530420) (0xc000846000) Create stream\nI0821 19:13:25.667731 333 log.go:172] (0xc000530420) (0xc000846000) Stream added, broadcasting: 5\nI0821 19:13:25.668429 333 log.go:172] (0xc000530420) Reply frame received for 5\nI0821 19:13:25.725615 333 log.go:172] (0xc000530420) Data frame received for 5\nI0821 19:13:25.725640 333 log.go:172] (0xc000846000) (5) Data frame handling\nI0821 19:13:25.725654 333 log.go:172] (0xc000846000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 19:13:25.768459 333 log.go:172] (0xc000530420) Data frame received for 5\nI0821 19:13:25.768483 333 log.go:172] (0xc000846000) (5) Data frame handling\nI0821 19:13:25.768513 333 log.go:172] (0xc000530420) Data frame received for 3\nI0821 19:13:25.768543 333 log.go:172] (0xc00077a3c0) (3) Data frame handling\nI0821 19:13:25.768561 333 log.go:172] (0xc00077a3c0) (3) Data frame sent\nI0821 19:13:25.768570 333 log.go:172] (0xc000530420) Data frame received for 3\nI0821 19:13:25.768579 333 log.go:172] (0xc00077a3c0) (3) Data frame handling\nI0821 19:13:25.770722 333 
log.go:172] (0xc000530420) Data frame received for 1\nI0821 19:13:25.770742 333 log.go:172] (0xc000748640) (1) Data frame handling\nI0821 19:13:25.770752 333 log.go:172] (0xc000748640) (1) Data frame sent\nI0821 19:13:25.770764 333 log.go:172] (0xc000530420) (0xc000748640) Stream removed, broadcasting: 1\nI0821 19:13:25.770849 333 log.go:172] (0xc000530420) Go away received\nI0821 19:13:25.771073 333 log.go:172] (0xc000530420) (0xc000748640) Stream removed, broadcasting: 1\nI0821 19:13:25.771090 333 log.go:172] (0xc000530420) (0xc00077a3c0) Stream removed, broadcasting: 3\nI0821 19:13:25.771101 333 log.go:172] (0xc000530420) (0xc000846000) Stream removed, broadcasting: 5\n" Aug 21 19:13:25.780: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 21 19:13:25.780: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 21 19:13:25.780: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 19:13:25.783: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Aug 21 19:13:35.792: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Aug 21 19:13:35.792: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Aug 21 19:13:35.792: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Aug 21 19:13:35.808: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999556s Aug 21 19:13:36.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990419309s Aug 21 19:13:37.818: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.985287548s Aug 21 19:13:38.823: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980412559s Aug 21 19:13:39.828: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975263957s Aug 21 19:13:40.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970033739s Aug 21 19:13:41.838: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965214227s Aug 21 19:13:42.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.960691156s Aug 21 19:13:43.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.933020971s Aug 21 19:13:44.875: INFO: Verifying statefulset ss doesn't scale past 3 for another 927.946515ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods run in namespace statefulset-9307 Aug 21 19:13:45.881: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9307 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 21 19:13:46.156: INFO: stderr: "I0821 19:13:46.031998 353 log.go:172] (0xc00094c420) (0xc00042a820) Create stream\nI0821 19:13:46.032066 353 log.go:172] (0xc00094c420) (0xc00042a820) Stream added, broadcasting: 1\nI0821 19:13:46.035580 353 log.go:172] (0xc00094c420) Reply frame received for 1\nI0821 19:13:46.035797 353 log.go:172] (0xc00094c420) (0xc00074c000) Create stream\nI0821 19:13:46.035940 353 log.go:172] (0xc00094c420) (0xc00074c000) Stream added, broadcasting: 3\nI0821 19:13:46.037754 353 log.go:172] (0xc00094c420) Reply frame received for 3\nI0821 19:13:46.037823 353 log.go:172] (0xc00094c420) (0xc00042a000) Create stream\nI0821 19:13:46.037844 353 log.go:172] (0xc00094c420) (0xc00042a000) Stream added, broadcasting: 5\nI0821 19:13:46.038914 353 log.go:172] (0xc00094c420) Reply frame
received for 5\nI0821 19:13:46.147557 353 log.go:172] (0xc00094c420) Data frame received for 3\nI0821 19:13:46.147595 353 log.go:172] (0xc00074c000) (3) Data frame handling\nI0821 19:13:46.147611 353 log.go:172] (0xc00074c000) (3) Data frame sent\nI0821 19:13:46.147620 353 log.go:172] (0xc00094c420) Data frame received for 3\nI0821 19:13:46.147662 353 log.go:172] (0xc00094c420) Data frame received for 5\nI0821 19:13:46.147729 353 log.go:172] (0xc00042a000) (5) Data frame handling\nI0821 19:13:46.147768 353 log.go:172] (0xc00042a000) (5) Data frame sent\nI0821 19:13:46.147791 353 log.go:172] (0xc00094c420) Data frame received for 5\nI0821 19:13:46.147829 353 log.go:172] (0xc00042a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0821 19:13:46.147857 353 log.go:172] (0xc00074c000) (3) Data frame handling\nI0821 19:13:46.149458 353 log.go:172] (0xc00094c420) Data frame received for 1\nI0821 19:13:46.149495 353 log.go:172] (0xc00042a820) (1) Data frame handling\nI0821 19:13:46.149519 353 log.go:172] (0xc00042a820) (1) Data frame sent\nI0821 19:13:46.149559 353 log.go:172] (0xc00094c420) (0xc00042a820) Stream removed, broadcasting: 1\nI0821 19:13:46.149596 353 log.go:172] (0xc00094c420) Go away received\nI0821 19:13:46.149922 353 log.go:172] (0xc00094c420) (0xc00042a820) Stream removed, broadcasting: 1\nI0821 19:13:46.149943 353 log.go:172] (0xc00094c420) (0xc00074c000) Stream removed, broadcasting: 3\nI0821 19:13:46.149953 353 log.go:172] (0xc00094c420) (0xc00042a000) Stream removed, broadcasting: 5\n" Aug 21 19:13:46.156: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 21 19:13:46.156: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 21 19:13:46.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9307 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 21 19:13:46.376: INFO: stderr: "I0821 19:13:46.285847 373 log.go:172] (0xc000970420) (0xc00035c820) Create stream\nI0821 19:13:46.285909 373 log.go:172] (0xc000970420) (0xc00035c820) Stream added, broadcasting: 1\nI0821 19:13:46.289601 373 log.go:172] (0xc000970420) Reply frame received for 1\nI0821 19:13:46.289641 373 log.go:172] (0xc000970420) (0xc00035c000) Create stream\nI0821 19:13:46.289654 373 log.go:172] (0xc000970420) (0xc00035c000) Stream added, broadcasting: 3\nI0821 19:13:46.290616 373 log.go:172] (0xc000970420) Reply frame received for 3\nI0821 19:13:46.290657 373 log.go:172] (0xc000970420) (0xc00035c140) Create stream\nI0821 19:13:46.290669 373 log.go:172] (0xc000970420) (0xc00035c140) Stream added, broadcasting: 5\nI0821 19:13:46.291603 373 log.go:172] (0xc000970420) Reply frame received for 5\nI0821 19:13:46.366567 373 log.go:172] (0xc000970420) Data frame received for 3\nI0821 19:13:46.366663 373 log.go:172] (0xc00035c000) (3) Data frame handling\nI0821 19:13:46.366678 373 log.go:172] (0xc00035c000) (3) Data frame sent\nI0821 19:13:46.366735 373 log.go:172] (0xc000970420) Data frame received for 5\nI0821 19:13:46.366810 373 log.go:172] (0xc00035c140) (5) Data frame handling\nI0821 19:13:46.366842 373 log.go:172] (0xc00035c140) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0821 19:13:46.366869 373 log.go:172] (0xc000970420) Data frame received for 5\nI0821 19:13:46.366901 373 log.go:172] (0xc00035c140) (5) Data frame handling\nI0821 19:13:46.366963 373 log.go:172] (0xc000970420) 
Data frame received for 3\nI0821 19:13:46.367010 373 log.go:172] (0xc00035c000) (3) Data frame handling\nI0821 19:13:46.369870 373 log.go:172] (0xc000970420) Data frame received for 1\nI0821 19:13:46.369931 373 log.go:172] (0xc00035c820) (1) Data frame handling\nI0821 19:13:46.369954 373 log.go:172] (0xc00035c820) (1) Data frame sent\nI0821 19:13:46.369967 373 log.go:172] (0xc000970420) (0xc00035c820) Stream removed, broadcasting: 1\nI0821 19:13:46.370180 373 log.go:172] (0xc000970420) (0xc00035c820) Stream removed, broadcasting: 1\nI0821 19:13:46.370192 373 log.go:172] (0xc000970420) (0xc00035c000) Stream removed, broadcasting: 3\nI0821 19:13:46.370418 373 log.go:172] (0xc000970420) (0xc00035c140) Stream removed, broadcasting: 5\n" Aug 21 19:13:46.376: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 21 19:13:46.376: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 21 19:13:46.376: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9307 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 21 19:13:46.585: INFO: stderr: "I0821 19:13:46.500959 393 log.go:172] (0xc000140dc0) (0xc000684820) Create stream\nI0821 19:13:46.501025 393 log.go:172] (0xc000140dc0) (0xc000684820) Stream added, broadcasting: 1\nI0821 19:13:46.503478 393 log.go:172] (0xc000140dc0) Reply frame received for 1\nI0821 19:13:46.503509 393 log.go:172] (0xc000140dc0) (0xc000932000) Create stream\nI0821 19:13:46.503524 393 log.go:172] (0xc000140dc0) (0xc000932000) Stream added, broadcasting: 3\nI0821 19:13:46.504454 393 log.go:172] (0xc000140dc0) Reply frame received for 3\nI0821 19:13:46.504486 393 log.go:172] (0xc000140dc0) (0xc0006848c0) Create stream\nI0821 19:13:46.504497 393 log.go:172] (0xc000140dc0) (0xc0006848c0) Stream added, broadcasting: 5\nI0821 19:13:46.505487 393 log.go:172] (0xc000140dc0) Reply frame received for 5\nI0821 19:13:46.573795 393 log.go:172] (0xc000140dc0) Data frame received for 3\nI0821 19:13:46.573830 393 log.go:172] (0xc000932000) (3) Data frame handling\nI0821 19:13:46.573843 393 log.go:172] (0xc000932000) (3) Data frame sent\nI0821 19:13:46.573851 393 log.go:172] (0xc000140dc0) Data frame received for 3\nI0821 19:13:46.573860 393 log.go:172] (0xc000932000) (3) Data frame handling\nI0821 19:13:46.573890 393 log.go:172] (0xc000140dc0) Data frame received for 5\nI0821 19:13:46.573900 393 log.go:172] (0xc0006848c0) (5) Data frame handling\nI0821 19:13:46.573915 393 log.go:172] (0xc0006848c0) (5) Data frame sent\nI0821 19:13:46.573923 393 log.go:172] (0xc000140dc0) Data frame received for 5\nI0821 19:13:46.573928 393 log.go:172] (0xc0006848c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0821 19:13:46.575221 393 log.go:172] (0xc000140dc0) Data frame received for 1\nI0821 19:13:46.575252 393 log.go:172] (0xc000684820) (1) Data frame handling\nI0821 19:13:46.575268 393 log.go:172] (0xc000684820) (1) Data frame sent\nI0821 19:13:46.575291 393 log.go:172] (0xc000140dc0) (0xc000684820) Stream removed, broadcasting: 1\nI0821 19:13:46.575340 393 log.go:172] (0xc000140dc0) Go away received\nI0821 19:13:46.575644 393 log.go:172] (0xc000140dc0) (0xc000684820) Stream removed, broadcasting: 1\nI0821 19:13:46.575656 393 log.go:172] (0xc000140dc0) (0xc000932000) Stream removed, broadcasting: 3\nI0821 19:13:46.575662 393 log.go:172] (0xc000140dc0) (0xc0006848c0) Stream removed, broadcasting: 
5\n" Aug 21 19:13:46.585: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 21 19:13:46.585: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 21 19:13:46.585: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 21 19:14:06.624: INFO: Deleting all statefulset in ns statefulset-9307 Aug 21 19:14:06.627: INFO: Scaling statefulset ss to 0 Aug 21 19:14:06.636: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 19:14:06.639: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:14:06.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9307" for this suite. Aug 21 19:14:12.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:14:12.747: INFO: namespace statefulset-9307 deletion completed in 6.089451683s • [SLOW TEST:91.027 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:14:12.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2264 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Aug 21 19:14:12.838: INFO: Found 0 stateful pods, waiting for 3 Aug 21 19:14:22.979: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 19:14:22.980: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 19:14:22.980: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 21 19:14:32.841: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 19:14:32.841: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 19:14:32.841: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 21 19:14:32.879: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Aug 21 19:14:42.925: INFO: Updating stateful set ss2 Aug 21 19:14:42.959: INFO: Waiting for Pod statefulset-2264/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 21 19:14:52.966: INFO: Waiting for Pod statefulset-2264/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Aug 21 19:15:03.315: INFO: Found 2 stateful pods, waiting for 3 Aug 21 19:15:13.321: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 19:15:13.321: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 19:15:13.321: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Aug 21 19:15:13.345: INFO: Updating stateful set ss2 Aug 21 19:15:13.374: INFO: Waiting for Pod statefulset-2264/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Aug 21 19:15:23.403: INFO: Updating stateful set ss2 Aug 21 19:15:23.441: INFO: Waiting for StatefulSet statefulset-2264/ss2 to complete update Aug 21 19:15:23.441: INFO: Waiting for Pod statefulset-2264/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 21 19:15:33.449: INFO: Deleting all statefulset in ns statefulset-2264 Aug 21 19:15:33.451: INFO: Scaling statefulset ss2 to 0 Aug 21 19:15:53.473: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 19:15:53.476: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:15:53.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2264" for this suite. 
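The canary and phased rolling updates above hinge on the StatefulSet RollingUpdate partition: only pods with an ordinal at or above the partition move to the new revision, so raising it pins every pod and lowering it step by step phases the rollout in (in reverse ordinal order). A minimal by-hand sketch of that flow with kubectl, assuming the container inside ss2 is named nginx (the log never shows the container name):

# Pin all three pods by setting the partition above the highest ordinal.
kubectl -n statefulset-2264 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
# Modify the template; nothing is recreated while the partition stays at 3.
kubectl -n statefulset-2264 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
# Canary: allow only ordinal 2 (ss2-2) to adopt the new revision.
kubectl -n statefulset-2264 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
# Phase the remaining pods in by lowering the partition to 0, then wait.
kubectl -n statefulset-2264 patch statefulset ss2 \
  -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
kubectl -n statefulset-2264 rollout status statefulset/ss2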
Aug 21 19:15:59.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:15:59.619: INFO: namespace statefulset-2264 deletion completed in 6.125482969s • [SLOW TEST:106.871 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:15:59.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-7948 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7948 to expose endpoints map[] Aug 21 19:15:59.736: INFO: successfully validated that service endpoint-test2 in namespace services-7948 exposes endpoints map[] (12.611746ms elapsed) STEP: Creating pod pod1 in namespace services-7948 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7948 to expose endpoints map[pod1:[80]] Aug 21 19:16:03.779: INFO: successfully validated that service endpoint-test2 in namespace services-7948 exposes endpoints map[pod1:[80]] (4.038162313s elapsed) STEP: Creating pod pod2 in namespace services-7948 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7948 to expose endpoints map[pod1:[80] pod2:[80]] Aug 21 19:16:06.848: INFO: successfully validated that service endpoint-test2 in namespace services-7948 exposes endpoints map[pod1:[80] pod2:[80]] (3.064069021s elapsed) STEP: Deleting pod pod1 in namespace services-7948 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7948 to expose endpoints map[pod2:[80]] Aug 21 19:16:07.873: INFO: successfully validated that service endpoint-test2 in namespace services-7948 exposes endpoints map[pod2:[80]] (1.021458197s elapsed) STEP: Deleting pod pod2 in namespace services-7948 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-7948 to expose endpoints map[] Aug 21 19:16:08.912: INFO: successfully validated that service endpoint-test2 
in namespace services-7948 exposes endpoints map[] (1.033796808s elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:16:09.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7948" for this suite. Aug 21 19:16:15.282: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:16:15.375: INFO: namespace services-7948 deletion completed in 6.142142218s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:15.755 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:16:15.375: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Aug 21 19:16:15.483: INFO: Waiting up to 5m0s for pod "client-containers-839483ec-234e-4a24-933f-4cc74c7947a7" in namespace "containers-9390" to be "success or failure" Aug 21 19:16:15.500: INFO: Pod "client-containers-839483ec-234e-4a24-933f-4cc74c7947a7": Phase="Pending", Reason="", readiness=false. Elapsed: 17.427565ms Aug 21 19:16:17.505: INFO: Pod "client-containers-839483ec-234e-4a24-933f-4cc74c7947a7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021853155s Aug 21 19:16:19.509: INFO: Pod "client-containers-839483ec-234e-4a24-933f-4cc74c7947a7": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02581605s STEP: Saw pod success Aug 21 19:16:19.509: INFO: Pod "client-containers-839483ec-234e-4a24-933f-4cc74c7947a7" satisfied condition "success or failure" Aug 21 19:16:19.512: INFO: Trying to get logs from node iruya-worker2 pod client-containers-839483ec-234e-4a24-933f-4cc74c7947a7 container test-container: STEP: delete the pod Aug 21 19:16:19.555: INFO: Waiting for pod client-containers-839483ec-234e-4a24-933f-4cc74c7947a7 to disappear Aug 21 19:16:19.559: INFO: Pod client-containers-839483ec-234e-4a24-933f-4cc74c7947a7 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:16:19.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9390" for this suite. Aug 21 19:16:25.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:16:25.699: INFO: namespace containers-9390 deletion completed in 6.136642695s • [SLOW TEST:10.324 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:16:25.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-c488562a-2ee0-4ecb-9cc7-50cb57d5a49a STEP: Creating a pod to test consume configMaps Aug 21 19:16:25.771: INFO: Waiting up to 5m0s for pod "pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e" in namespace "configmap-533" to be "success or failure" Aug 21 19:16:25.774: INFO: Pod "pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.150731ms Aug 21 19:16:27.778: INFO: Pod "pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006978864s Aug 21 19:16:29.782: INFO: Pod "pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e": Phase="Running", Reason="", readiness=true. Elapsed: 4.011069959s Aug 21 19:16:31.786: INFO: Pod "pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015164796s STEP: Saw pod success Aug 21 19:16:31.787: INFO: Pod "pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e" satisfied condition "success or failure" Aug 21 19:16:31.789: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e container configmap-volume-test: STEP: delete the pod Aug 21 19:16:31.812: INFO: Waiting for pod pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e to disappear Aug 21 19:16:31.817: INFO: Pod pod-configmaps-413240f6-7c9a-4bf7-a01f-b979f4ee7b5e no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:16:31.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-533" for this suite. Aug 21 19:16:37.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:16:37.930: INFO: namespace configmap-533 deletion completed in 6.108412476s • [SLOW TEST:12.230 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:16:37.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Aug 21 19:16:38.012: INFO: Waiting up to 5m0s for pod "var-expansion-7e810be5-3d70-4258-bf7e-2343e8ffa827" in namespace "var-expansion-9116" to be "success or failure" Aug 21 19:16:38.029: INFO: Pod "var-expansion-7e810be5-3d70-4258-bf7e-2343e8ffa827": Phase="Pending", Reason="", readiness=false. Elapsed: 16.971508ms Aug 21 19:16:40.033: INFO: Pod "var-expansion-7e810be5-3d70-4258-bf7e-2343e8ffa827": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021275287s Aug 21 19:16:42.037: INFO: Pod "var-expansion-7e810be5-3d70-4258-bf7e-2343e8ffa827": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.02528284s STEP: Saw pod success Aug 21 19:16:42.037: INFO: Pod "var-expansion-7e810be5-3d70-4258-bf7e-2343e8ffa827" satisfied condition "success or failure" Aug 21 19:16:42.040: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-7e810be5-3d70-4258-bf7e-2343e8ffa827 container dapi-container: STEP: delete the pod Aug 21 19:16:42.054: INFO: Waiting for pod var-expansion-7e810be5-3d70-4258-bf7e-2343e8ffa827 to disappear Aug 21 19:16:42.058: INFO: Pod var-expansion-7e810be5-3d70-4258-bf7e-2343e8ffa827 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:16:42.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9116" for this suite. Aug 21 19:16:48.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:16:48.165: INFO: namespace var-expansion-9116 deletion completed in 6.103949525s • [SLOW TEST:10.236 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:16:48.166: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-e812220c-39e5-444a-afe9-4c9837f83d04 in namespace container-probe-35 Aug 21 19:16:52.236: INFO: Started pod busybox-e812220c-39e5-444a-afe9-4c9837f83d04 in namespace container-probe-35 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 19:16:52.239: INFO: Initial restart count of pod busybox-e812220c-39e5-444a-afe9-4c9837f83d04 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:20:52.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-35" for this suite.
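The probe test above passes by watching restartCount stay at 0 for the whole observation window. A minimal sketch of the kind of pod it creates, with hypothetical names and a stock busybox image: the container writes /tmp/health once and leaves it in place, so the exec probe keeps succeeding and the kubelet never restarts it.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec-ok   # hypothetical name
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the file the probe reads, then just stay alive.
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# The probe never fails, so this should keep printing 0:
kubectl get pod liveness-exec-ok -o jsonpath='{.status.containerStatuses[0].restartCount}'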
Aug 21 19:20:59.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:20:59.083: INFO: namespace container-probe-35 deletion completed in 6.090610239s • [SLOW TEST:250.917 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:20:59.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Aug 21 19:20:59.148: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix433571714/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:20:59.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9439" for this suite.
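kubectl proxy can listen on a Unix domain socket instead of a TCP port, which is what --unix-socket exercises: the test starts the proxy and reads /api/ back through the socket. A rough equivalent by hand (the socket path is arbitrary):

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
PROXY_PID=$!
sleep 1
# curl speaks HTTP over the socket; the hostname in the URL is a placeholder.
curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill "$PROXY_PID"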
Aug 21 19:21:05.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:21:05.319: INFO: namespace kubectl-9439 deletion completed in 6.089602853s • [SLOW TEST:6.236 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:21:05.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-1028/secret-test-12fd2071-c664-43d4-b3e4-1b5cebbe9145 STEP: Creating a pod to test consume secrets Aug 21 19:21:05.404: INFO: Waiting up to 5m0s for pod "pod-configmaps-128c4eba-6d26-4778-aefa-6fbb2d7ca3c7" in namespace "secrets-1028" to be "success or failure" Aug 21 19:21:05.407: INFO: Pod "pod-configmaps-128c4eba-6d26-4778-aefa-6fbb2d7ca3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.496216ms Aug 21 19:21:07.412: INFO: Pod "pod-configmaps-128c4eba-6d26-4778-aefa-6fbb2d7ca3c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008433995s Aug 21 19:21:09.417: INFO: Pod "pod-configmaps-128c4eba-6d26-4778-aefa-6fbb2d7ca3c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012859586s STEP: Saw pod success Aug 21 19:21:09.417: INFO: Pod "pod-configmaps-128c4eba-6d26-4778-aefa-6fbb2d7ca3c7" satisfied condition "success or failure" Aug 21 19:21:09.420: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-128c4eba-6d26-4778-aefa-6fbb2d7ca3c7 container env-test: STEP: delete the pod Aug 21 19:21:09.439: INFO: Waiting for pod pod-configmaps-128c4eba-6d26-4778-aefa-6fbb2d7ca3c7 to disappear Aug 21 19:21:09.444: INFO: Pod pod-configmaps-128c4eba-6d26-4778-aefa-6fbb2d7ca3c7 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:21:09.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1028" for this suite. 
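Consuming a secret "via the environment" means a single key is injected through env[].valueFrom.secretKeyRef rather than a mounted volume. A minimal sketch with assumed secret, pod, and key names (the log only shows generated names):

kubectl create secret generic secret-test --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: env-test-pod   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    # Print the injected variable and exit.
    args: ["/bin/sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test
          key: data-1
EOF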
Aug 21 19:21:15.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:21:15.532: INFO: namespace secrets-1028 deletion completed in 6.084090185s • [SLOW TEST:10.213 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:21:15.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 19:21:15.603: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f6a4956-4a88-44aa-acf7-09b7a26a76a5" in namespace "projected-1036" to be "success or failure" Aug 21 19:21:15.611: INFO: Pod "downwardapi-volume-0f6a4956-4a88-44aa-acf7-09b7a26a76a5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.136294ms Aug 21 19:21:17.616: INFO: Pod "downwardapi-volume-0f6a4956-4a88-44aa-acf7-09b7a26a76a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012577013s Aug 21 19:21:19.620: INFO: Pod "downwardapi-volume-0f6a4956-4a88-44aa-acf7-09b7a26a76a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017028166s STEP: Saw pod success Aug 21 19:21:19.620: INFO: Pod "downwardapi-volume-0f6a4956-4a88-44aa-acf7-09b7a26a76a5" satisfied condition "success or failure" Aug 21 19:21:19.624: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0f6a4956-4a88-44aa-acf7-09b7a26a76a5 container client-container: STEP: delete the pod Aug 21 19:21:19.646: INFO: Waiting for pod downwardapi-volume-0f6a4956-4a88-44aa-acf7-09b7a26a76a5 to disappear Aug 21 19:21:19.650: INFO: Pod downwardapi-volume-0f6a4956-4a88-44aa-acf7-09b7a26a76a5 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:21:19.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1036" for this suite. 
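A downward API volume exposes container resources through resourceFieldRef, and when the container declares no memory limit the kubelet substitutes the node's allocatable memory, which is exactly what this test asserts. A minimal sketch of a projected downwardAPI volume reading limits.memory (pod and path names are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No memory limit is set, so this file reports node allocatable memory.
    args: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF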
Aug 21 19:21:25.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:21:25.757: INFO: namespace projected-1036 deletion completed in 6.103066293s • [SLOW TEST:10.225 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:21:25.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Aug 21 19:21:25.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2150' Aug 21 19:21:26.072: INFO: stderr: "" Aug 21 19:21:26.072: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 21 19:21:26.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2150' Aug 21 19:21:26.199: INFO: stderr: "" Aug 21 19:21:26.199: INFO: stdout: "update-demo-nautilus-cg865 update-demo-nautilus-gpx84 " Aug 21 19:21:26.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:26.299: INFO: stderr: "" Aug 21 19:21:26.299: INFO: stdout: "" Aug 21 19:21:26.299: INFO: update-demo-nautilus-cg865 is created but not running Aug 21 19:21:31.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2150' Aug 21 19:21:31.395: INFO: stderr: "" Aug 21 19:21:31.395: INFO: stdout: "update-demo-nautilus-cg865 update-demo-nautilus-gpx84 " Aug 21 19:21:31.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:31.495: INFO: stderr: "" Aug 21 19:21:31.495: INFO: stdout: "true" Aug 21 19:21:31.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:31.585: INFO: stderr: "" Aug 21 19:21:31.585: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 19:21:31.585: INFO: validating pod update-demo-nautilus-cg865 Aug 21 19:21:31.590: INFO: got data: { "image": "nautilus.jpg" } Aug 21 19:21:31.590: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 19:21:31.590: INFO: update-demo-nautilus-cg865 is verified up and running Aug 21 19:21:31.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpx84 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:31.675: INFO: stderr: "" Aug 21 19:21:31.675: INFO: stdout: "true" Aug 21 19:21:31.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-gpx84 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:31.765: INFO: stderr: "" Aug 21 19:21:31.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 19:21:31.765: INFO: validating pod update-demo-nautilus-gpx84 Aug 21 19:21:31.769: INFO: got data: { "image": "nautilus.jpg" } Aug 21 19:21:31.769: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 19:21:31.769: INFO: update-demo-nautilus-gpx84 is verified up and running STEP: scaling down the replication controller Aug 21 19:21:31.771: INFO: scanned /root for discovery docs: Aug 21 19:21:31.771: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2150' Aug 21 19:21:32.935: INFO: stderr: "" Aug 21 19:21:32.935: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 21 19:21:32.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2150' Aug 21 19:21:33.062: INFO: stderr: "" Aug 21 19:21:33.062: INFO: stdout: "update-demo-nautilus-cg865 update-demo-nautilus-gpx84 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 21 19:21:38.063: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2150' Aug 21 19:21:38.164: INFO: stderr: "" Aug 21 19:21:38.164: INFO: stdout: "update-demo-nautilus-cg865 update-demo-nautilus-gpx84 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 21 19:21:43.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2150' Aug 21 19:21:43.269: INFO: stderr: "" Aug 21 19:21:43.269: INFO: stdout: "update-demo-nautilus-cg865 update-demo-nautilus-gpx84 " STEP: Replicas for name=update-demo: expected=1 actual=2 Aug 21 19:21:48.270: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2150' Aug 21 19:21:48.372: INFO: stderr: "" Aug 21 19:21:48.372: INFO: stdout: "update-demo-nautilus-cg865 " Aug 21 19:21:48.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:48.468: INFO: stderr: "" Aug 21 19:21:48.468: INFO: stdout: "true" Aug 21 19:21:48.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:48.555: INFO: stderr: "" Aug 21 19:21:48.555: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 19:21:48.555: INFO: validating pod update-demo-nautilus-cg865 Aug 21 19:21:48.557: INFO: got data: { "image": "nautilus.jpg" } Aug 21 19:21:48.557: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 19:21:48.557: INFO: update-demo-nautilus-cg865 is verified up and running STEP: scaling up the replication controller Aug 21 19:21:48.559: INFO: scanned /root for discovery docs: Aug 21 19:21:48.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2150' Aug 21 19:21:49.666: INFO: stderr: "" Aug 21 19:21:49.666: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Aug 21 19:21:49.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2150' Aug 21 19:21:49.757: INFO: stderr: "" Aug 21 19:21:49.757: INFO: stdout: "update-demo-nautilus-cg865 update-demo-nautilus-wksgr " Aug 21 19:21:49.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:49.845: INFO: stderr: "" Aug 21 19:21:49.845: INFO: stdout: "true" Aug 21 19:21:49.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:49.949: INFO: stderr: "" Aug 21 19:21:49.949: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 19:21:49.949: INFO: validating pod update-demo-nautilus-cg865 Aug 21 19:21:49.952: INFO: got data: { "image": "nautilus.jpg" } Aug 21 19:21:49.952: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 19:21:49.952: INFO: update-demo-nautilus-cg865 is verified up and running Aug 21 19:21:49.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wksgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:50.045: INFO: stderr: "" Aug 21 19:21:50.045: INFO: stdout: "" Aug 21 19:21:50.045: INFO: update-demo-nautilus-wksgr is created but not running Aug 21 19:21:55.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2150' Aug 21 19:21:55.141: INFO: stderr: "" Aug 21 19:21:55.141: INFO: stdout: "update-demo-nautilus-cg865 update-demo-nautilus-wksgr " Aug 21 19:21:55.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:55.251: INFO: stderr: "" Aug 21 19:21:55.251: INFO: stdout: "true" Aug 21 19:21:55.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg865 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:55.357: INFO: stderr: "" Aug 21 19:21:55.357: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 19:21:55.357: INFO: validating pod update-demo-nautilus-cg865 Aug 21 19:21:55.360: INFO: got data: { "image": "nautilus.jpg" } Aug 21 19:21:55.360: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Aug 21 19:21:55.360: INFO: update-demo-nautilus-cg865 is verified up and running Aug 21 19:21:55.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wksgr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:55.447: INFO: stderr: "" Aug 21 19:21:55.447: INFO: stdout: "true" Aug 21 19:21:55.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-wksgr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2150' Aug 21 19:21:55.533: INFO: stderr: "" Aug 21 19:21:55.533: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 19:21:55.533: INFO: validating pod update-demo-nautilus-wksgr Aug 21 19:21:55.537: INFO: got data: { "image": "nautilus.jpg" } Aug 21 19:21:55.537: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 19:21:55.537: INFO: update-demo-nautilus-wksgr is verified up and running STEP: using delete to clean up resources Aug 21 19:21:55.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2150' Aug 21 19:21:55.636: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 19:21:55.636: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Aug 21 19:21:55.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2150' Aug 21 19:21:55.738: INFO: stderr: "No resources found.\n" Aug 21 19:21:55.738: INFO: stdout: "" Aug 21 19:21:55.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2150 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 21 19:21:55.869: INFO: stderr: "" Aug 21 19:21:55.869: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:21:55.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2150" for this suite. 
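The scale exercise above is plain kubectl: scale the replication controller down, poll the label selector until the extra pod is gone, then scale back up. A sketch using the names from the log (namespace and controller taken verbatim):

kubectl -n kubectl-2150 scale rc update-demo-nautilus --replicas=1 --timeout=5m
# Poll the selector until a single pod remains, as the test loop does.
kubectl -n kubectl-2150 get pods -l name=update-demo \
  -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
kubectl -n kubectl-2150 scale rc update-demo-nautilus --replicas=2 --timeout=5m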
Aug 21 19:22:17.921: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:22:17.995: INFO: namespace kubectl-2150 deletion completed in 22.112515296s • [SLOW TEST:52.238 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:22:17.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-e97fb5f8-af46-44f0-ae42-6ce189a39378 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:22:24.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2333" for this suite. 
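The ConfigMap test above exercises both the data and binaryData fields ("Waiting for pod with text data" / "Waiting for pod with binary data"), mounting each key as a file. A minimal sketch of an equivalent object, with illustrative names (binaryData values must be base64-encoded):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: binary-demo            # illustrative name
  data:
    text-key: "plain text value"
  binaryData:
    binary-key: aGVsbG8=         # base64 encoding of the raw bytes "hello"
  EOF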
Aug 21 19:22:47.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:22:47.353: INFO: namespace configmap-2333 deletion completed in 23.200488162s • [SLOW TEST:29.357 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:22:47.353: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-87cba0c1-65f0-4c9f-8115-22960aa2c31b STEP: Creating a pod to test consume secrets Aug 21 19:22:47.440: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd9294a7-66ef-48f4-93fb-fbc1fa253454" in namespace "projected-7211" to be "success or failure" Aug 21 19:22:47.443: INFO: Pod "pod-projected-secrets-bd9294a7-66ef-48f4-93fb-fbc1fa253454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.538663ms Aug 21 19:22:49.508: INFO: Pod "pod-projected-secrets-bd9294a7-66ef-48f4-93fb-fbc1fa253454": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068059614s Aug 21 19:22:51.513: INFO: Pod "pod-projected-secrets-bd9294a7-66ef-48f4-93fb-fbc1fa253454": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.072863614s STEP: Saw pod success Aug 21 19:22:51.513: INFO: Pod "pod-projected-secrets-bd9294a7-66ef-48f4-93fb-fbc1fa253454" satisfied condition "success or failure" Aug 21 19:22:51.516: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-bd9294a7-66ef-48f4-93fb-fbc1fa253454 container projected-secret-volume-test: STEP: delete the pod Aug 21 19:22:51.574: INFO: Waiting for pod pod-projected-secrets-bd9294a7-66ef-48f4-93fb-fbc1fa253454 to disappear Aug 21 19:22:51.581: INFO: Pod pod-projected-secrets-bd9294a7-66ef-48f4-93fb-fbc1fa253454 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:22:51.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7211" for this suite. 
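In the projected-secret test above, the secret's keys are remapped to new paths inside the volume (the "mappings" in the test name). A minimal sketch of a pod consuming a secret the same way, all names illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-secret-demo          # illustrative
  spec:
    containers:
    - name: reader
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: creds
        mountPath: /projected-volume
        readOnly: true
    volumes:
    - name: creds
      projected:
        sources:
        - secret:
            name: my-secret              # assumed to already exist
            items:
            - key: username
              path: my-group/my-username # key appears under this remapped path
  EOF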
Aug 21 19:22:57.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:22:57.751: INFO: namespace projected-7211 deletion completed in 6.167756472s • [SLOW TEST:10.399 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:22:57.752: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Aug 21 19:22:57.835: INFO: namespace kubectl-9306 Aug 21 19:22:57.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9306' Aug 21 19:23:00.597: INFO: stderr: "" Aug 21 19:23:00.597: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Aug 21 19:23:01.601: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:23:01.601: INFO: Found 0 / 1 Aug 21 19:23:04.820: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:23:04.820: INFO: Found 0 / 1 Aug 21 19:23:05.602: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:23:05.602: INFO: Found 0 / 1 Aug 21 19:23:06.602: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:23:06.602: INFO: Found 0 / 1 Aug 21 19:23:07.602: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:23:07.602: INFO: Found 1 / 1 Aug 21 19:23:07.602: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Aug 21 19:23:07.605: INFO: Selector matched 1 pods for map[app:redis] Aug 21 19:23:07.605: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Aug 21 19:23:07.605: INFO: wait on redis-master startup in kubectl-9306 Aug 21 19:23:07.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-76kzc redis-master --namespace=kubectl-9306' Aug 21 19:23:07.715: INFO: stderr: "" Aug 21 19:23:07.715: INFO: stdout: "[Redis ASCII-art startup banner elided] Redis 3.2.12 (35a5711f/0) 64 bit, Running in standalone mode, Port: 6379, PID: 1\n1:M 21 Aug 19:23:06.217 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Aug 19:23:06.217 # Server started, Redis version 3.2.12\n1:M 21 Aug 19:23:06.217 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Aug 19:23:06.217 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Aug 21 19:23:07.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9306' Aug 21 19:23:07.862: INFO: stderr: "" Aug 21 19:23:07.862: INFO: stdout: "service/rm2 exposed\n" Aug 21 19:23:07.869: INFO: Service rm2 in namespace kubectl-9306 found. STEP: exposing service Aug 21 19:23:09.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9306' Aug 21 19:23:10.051: INFO: stderr: "" Aug 21 19:23:10.051: INFO: stdout: "service/rm3 exposed\n" Aug 21 19:23:10.066: INFO: Service rm3 in namespace kubectl-9306 found. [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:23:12.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9306" for this suite.
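The expose test chains the two invocations shown in the log: the first derives service rm2 from the replication controller's selector, the second derives rm3 from rm2, so both route to the same redis-master pod on target port 6379. Reproduced by hand (namespace flag omitted for brevity):

  kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
  kubectl get svc rm2 rm3   # both carry the selector copied from the source object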
Aug 21 19:23:34.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:23:34.179: INFO: namespace kubectl-9306 deletion completed in 22.101737257s • [SLOW TEST:36.428 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:23:34.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Aug 21 19:23:34.267: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:23:42.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2965" for this suite. 
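The init-container test rests on the ordering guarantee that spec.initContainers run to completion, one at a time, before any regular container starts; with restartPolicy Always the pod then keeps its app container running. A minimal sketch, with illustrative names and images:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo                    # illustrative
  spec:
    restartPolicy: Always
    initContainers:                    # each must exit 0 before the next starts
    - name: init1
      image: busybox
      command: ["sh", "-c", "true"]
    - name: init2
      image: busybox
      command: ["sh", "-c", "true"]
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
  EOF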
Aug 21 19:24:06.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:24:06.623: INFO: namespace init-container-2965 deletion completed in 24.15798703s • [SLOW TEST:32.443 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:24:06.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-cb48362a-f95b-41f9-b4f9-aabfc03bf523 STEP: Creating a pod to test consume secrets Aug 21 19:24:06.712: INFO: Waiting up to 5m0s for pod "pod-secrets-df7b1efd-f37e-4d84-b152-7529ab33ee13" in namespace "secrets-5150" to be "success or failure" Aug 21 19:24:06.716: INFO: Pod "pod-secrets-df7b1efd-f37e-4d84-b152-7529ab33ee13": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00249ms Aug 21 19:24:08.721: INFO: Pod "pod-secrets-df7b1efd-f37e-4d84-b152-7529ab33ee13": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008477752s Aug 21 19:24:10.725: INFO: Pod "pod-secrets-df7b1efd-f37e-4d84-b152-7529ab33ee13": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0128438s STEP: Saw pod success Aug 21 19:24:10.725: INFO: Pod "pod-secrets-df7b1efd-f37e-4d84-b152-7529ab33ee13" satisfied condition "success or failure" Aug 21 19:24:10.728: INFO: Trying to get logs from node iruya-worker pod pod-secrets-df7b1efd-f37e-4d84-b152-7529ab33ee13 container secret-volume-test: STEP: delete the pod Aug 21 19:24:10.748: INFO: Waiting for pod pod-secrets-df7b1efd-f37e-4d84-b152-7529ab33ee13 to disappear Aug 21 19:24:10.753: INFO: Pod pod-secrets-df7b1efd-f37e-4d84-b152-7529ab33ee13 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:24:10.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5150" for this suite. 
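"Item Mode set" in the test name refers to the per-item mode field of a secret volume, which overrides the volume-wide defaultMode for individual keys. A sketch of the relevant stanza inside a consuming pod, all names illustrative:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo             # illustrative
  spec:
    containers:
    - name: reader
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: my-secret          # assumed to already exist
        items:
        - key: data-1
          path: new-path-data-1
          mode: 0400                   # applies to this file only
  EOF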
Aug 21 19:24:16.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:24:16.876: INFO: namespace secrets-5150 deletion completed in 6.120588706s • [SLOW TEST:10.253 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:24:16.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:24:16.923: INFO: Creating deployment "test-recreate-deployment" Aug 21 19:24:16.982: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Aug 21 19:24:16.997: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Aug 21 19:24:19.005: INFO: Waiting for deployment "test-recreate-deployment" to complete Aug 21 19:24:19.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733634657, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733634657, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733634657, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733634656, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:24:21.012: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Aug 21 19:24:21.020: INFO: Updating deployment test-recreate-deployment Aug 21 19:24:21.020: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 21 19:24:21.676: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-5686,SelfLink:/apis/apps/v1/namespaces/deployment-5686/deployments/test-recreate-deployment,UID:d2b36f61-371d-4823-ad28-9bf52c4ce894,ResourceVersion:1621351,Generation:2,CreationTimestamp:2020-08-21 19:24:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-08-21 19:24:21 +0000 UTC 2020-08-21 19:24:21 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-08-21 19:24:21 +0000 UTC 2020-08-21 19:24:16 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Aug 21 19:24:21.698: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-5686,SelfLink:/apis/apps/v1/namespaces/deployment-5686/replicasets/test-recreate-deployment-5c8c9cc69d,UID:0d27291e-a2f0-468e-aaa3-36720a34961e,ResourceVersion:1621350,Generation:1,CreationTimestamp:2020-08-21 19:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d2b36f61-371d-4823-ad28-9bf52c4ce894 0xc001051e67 0xc001051e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 21 19:24:21.698: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Aug 21 19:24:21.699: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-5686,SelfLink:/apis/apps/v1/namespaces/deployment-5686/replicasets/test-recreate-deployment-6df85df6b9,UID:7e8a3ca8-4a68-4a38-9ec8-f3d938e301e8,ResourceVersion:1621341,Generation:2,CreationTimestamp:2020-08-21 19:24:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 
1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment d2b36f61-371d-4823-ad28-9bf52c4ce894 0xc001051f37 0xc001051f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 21 19:24:21.702: INFO: Pod "test-recreate-deployment-5c8c9cc69d-bsr7c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-bsr7c,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-5686,SelfLink:/api/v1/namespaces/deployment-5686/pods/test-recreate-deployment-5c8c9cc69d-bsr7c,UID:14e98df7-3fc2-4d46-b24a-c415bcb397c1,ResourceVersion:1621352,Generation:0,CreationTimestamp:2020-08-21 19:24:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 0d27291e-a2f0-468e-aaa3-36720a34961e 0xc002b12b77 0xc002b12b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hmgwh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hmgwh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hmgwh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b12bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b12c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:24:21 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:24:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:24:21 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:24:21 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-21 19:24:21 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:24:21.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5686" for this suite. 
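The dumps above show the defining behavior of strategy type Recreate: the old ReplicaSet (test-recreate-deployment-6df85df6b9) is scaled to Replicas:*0 before the new one (5c8c9cc69d) brings up any pod, so availability drops to zero during the rollout. A minimal sketch of such a deployment, with illustrative names:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: recreate-demo                # illustrative
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: recreate-demo
    strategy:
      type: Recreate                   # delete all old pods first; expect downtime
    template:
      metadata:
        labels:
          app: recreate-demo
      spec:
        containers:
        - name: nginx
          image: docker.io/library/nginx:1.14-alpine
  EOF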
Aug 21 19:24:27.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:24:27.807: INFO: namespace deployment-5686 deletion completed in 6.102259193s • [SLOW TEST:10.930 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:24:27.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 21 19:24:27.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-8456' Aug 21 19:24:27.979: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 21 19:24:27.979: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Aug 21 19:24:30.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8456' Aug 21 19:24:30.150: INFO: stderr: "" Aug 21 19:24:30.150: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:24:30.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8456" for this suite. Aug 21 19:26:32.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:26:32.240: INFO: namespace kubectl-8456 deletion completed in 2m2.086623158s • [SLOW TEST:124.432 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:26:32.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-120a064e-ccf9-4ca2-8438-3ca8156efe17 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-120a064e-ccf9-4ca2-8438-3ca8156efe17 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:27:40.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3650" for this suite. 
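The long wait in the projected-configMap test (roughly a minute between the update and the observation) reflects that the kubelet refreshes mounted ConfigMap contents periodically rather than instantly. One way to reproduce the flow by hand, assuming a pod ("demo-pod", illustrative) already mounts the ConfigMap at /etc/config, using the create-then-replace idiom common with kubectl of this vintage:

  kubectl create configmap demo-config --from-literal=key=value-1
  # ...create demo-pod mounting demo-config at /etc/config...
  kubectl create configmap demo-config --from-literal=key=value-2 \
    --dry-run -o yaml | kubectl replace -f -
  kubectl exec demo-pod -- cat /etc/config/key   # eventually prints value-2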
Aug 21 19:28:02.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:28:02.805: INFO: namespace projected-3650 deletion completed in 22.103470233s • [SLOW TEST:90.565 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:28:02.806: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Aug 21 19:28:02.906: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:28:02.921: INFO: Number of nodes with available pods: 0 Aug 21 19:28:02.922: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:28:03.930: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:28:03.933: INFO: Number of nodes with available pods: 0 Aug 21 19:28:03.933: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:28:05.023: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:28:05.257: INFO: Number of nodes with available pods: 0 Aug 21 19:28:05.257: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:28:05.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:28:05.930: INFO: Number of nodes with available pods: 0 Aug 21 19:28:05.931: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:28:06.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:28:06.946: INFO: Number of nodes with available pods: 0 Aug 21 19:28:06.946: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:28:07.927: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:28:07.931: INFO: Number of nodes with available pods: 2 Aug 21 19:28:07.931: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Aug 21 19:28:07.948: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:28:07.966: INFO: Number of nodes with available pods: 2 Aug 21 19:28:07.966: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
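Each polling round above first skips iruya-control-plane because the DaemonSet's pods carry no toleration for its node-role.kubernetes.io/master:NoSchedule taint, so only the two workers count toward availability. To also run such a DaemonSet on the control-plane node, one could add a matching toleration, e.g. (a sketch; the namespace and object name are the test's):

  kubectl --namespace=daemonsets-4993 patch daemonset daemon-set --type=merge \
    -p '{"spec":{"template":{"spec":{"tolerations":[{"key":"node-role.kubernetes.io/master","operator":"Exists","effect":"NoSchedule"}]}}}}'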
[AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4993, will wait for the garbage collector to delete the pods Aug 21 19:28:09.055: INFO: Deleting DaemonSet.extensions daemon-set took: 4.929024ms Aug 21 19:28:09.455: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.288256ms Aug 21 19:28:23.658: INFO: Number of nodes with available pods: 0 Aug 21 19:28:23.658: INFO: Number of running nodes: 0, number of available pods: 0 Aug 21 19:28:23.662: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4993/daemonsets","resourceVersion":"1621969"},"items":null} Aug 21 19:28:23.665: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4993/pods","resourceVersion":"1621969"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:28:23.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4993" for this suite. Aug 21 19:28:29.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:28:29.766: INFO: namespace daemonsets-4993 deletion completed in 6.089262325s • [SLOW TEST:26.960 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:28:29.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Aug 21 19:28:29.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-4260' Aug 21 19:28:30.092: INFO: stderr: "" Aug 21 19:28:30.092: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 21 19:28:30.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4260' Aug 21 19:28:30.195: INFO: stderr: "" Aug 21 19:28:30.195: INFO: stdout: "update-demo-nautilus-cgzp4 update-demo-nautilus-lw6vp " Aug 21 19:28:30.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgzp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:30.291: INFO: stderr: "" Aug 21 19:28:30.291: INFO: stdout: "" Aug 21 19:28:30.291: INFO: update-demo-nautilus-cgzp4 is created but not running Aug 21 19:28:35.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4260' Aug 21 19:28:35.385: INFO: stderr: "" Aug 21 19:28:35.385: INFO: stdout: "update-demo-nautilus-cgzp4 update-demo-nautilus-lw6vp " Aug 21 19:28:35.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgzp4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:35.480: INFO: stderr: "" Aug 21 19:28:35.480: INFO: stdout: "true" Aug 21 19:28:35.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cgzp4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:35.569: INFO: stderr: "" Aug 21 19:28:35.569: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 19:28:35.569: INFO: validating pod update-demo-nautilus-cgzp4 Aug 21 19:28:35.573: INFO: got data: { "image": "nautilus.jpg" } Aug 21 19:28:35.573: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 19:28:35.573: INFO: update-demo-nautilus-cgzp4 is verified up and running Aug 21 19:28:35.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lw6vp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:35.669: INFO: stderr: "" Aug 21 19:28:35.669: INFO: stdout: "true" Aug 21 19:28:35.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lw6vp -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:35.765: INFO: stderr: "" Aug 21 19:28:35.765: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Aug 21 19:28:35.765: INFO: validating pod update-demo-nautilus-lw6vp Aug 21 19:28:35.768: INFO: got data: { "image": "nautilus.jpg" } Aug 21 19:28:35.768: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Aug 21 19:28:35.768: INFO: update-demo-nautilus-lw6vp is verified up and running STEP: rolling-update to new replication controller Aug 21 19:28:35.770: INFO: scanned /root for discovery docs: Aug 21 19:28:35.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4260' Aug 21 19:28:58.337: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 21 19:28:58.337: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Aug 21 19:28:58.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4260' Aug 21 19:28:58.443: INFO: stderr: "" Aug 21 19:28:58.443: INFO: stdout: "update-demo-kitten-htspw update-demo-kitten-m6ln8 " Aug 21 19:28:58.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-htspw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:58.531: INFO: stderr: "" Aug 21 19:28:58.531: INFO: stdout: "true" Aug 21 19:28:58.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-htspw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:58.617: INFO: stderr: "" Aug 21 19:28:58.617: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 21 19:28:58.617: INFO: validating pod update-demo-kitten-htspw Aug 21 19:28:58.621: INFO: got data: { "image": "kitten.jpg" } Aug 21 19:28:58.621: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 21 19:28:58.621: INFO: update-demo-kitten-htspw is verified up and running Aug 21 19:28:58.621: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m6ln8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:58.730: INFO: stderr: "" Aug 21 19:28:58.730: INFO: stdout: "true" Aug 21 19:28:58.730: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-m6ln8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4260' Aug 21 19:28:58.822: INFO: stderr: "" Aug 21 19:28:58.823: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Aug 21 19:28:58.823: INFO: validating pod update-demo-kitten-m6ln8 Aug 21 19:28:58.827: INFO: got data: { "image": "kitten.jpg" } Aug 21 19:28:58.827: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Aug 21 19:28:58.827: INFO: update-demo-kitten-m6ln8 is verified up and running [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:28:58.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4260" for this suite. Aug 21 19:29:22.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:29:22.948: INFO: namespace kubectl-4260 deletion completed in 24.117495706s • [SLOW TEST:53.181 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:29:22.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Aug 21 19:29:22.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7829' Aug 21 
[sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 19:29:22.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Aug 21 19:29:22.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7829'
Aug 21 19:29:23.240: INFO: stderr: ""
Aug 21 19:29:23.240: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Aug 21 19:29:23.240: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7829'
Aug 21 19:29:23.351: INFO: stderr: ""
Aug 21 19:29:23.351: INFO: stdout: "update-demo-nautilus-7gw2m update-demo-nautilus-t8cgt "
Aug 21 19:29:23.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gw2m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7829'
Aug 21 19:29:23.445: INFO: stderr: ""
Aug 21 19:29:23.445: INFO: stdout: ""
Aug 21 19:29:23.445: INFO: update-demo-nautilus-7gw2m is created but not running
Aug 21 19:29:28.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7829'
Aug 21 19:29:28.550: INFO: stderr: ""
Aug 21 19:29:28.550: INFO: stdout: "update-demo-nautilus-7gw2m update-demo-nautilus-t8cgt "
Aug 21 19:29:28.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gw2m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7829'
Aug 21 19:29:28.647: INFO: stderr: ""
Aug 21 19:29:28.647: INFO: stdout: "true"
Aug 21 19:29:28.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7gw2m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7829'
Aug 21 19:29:28.746: INFO: stderr: ""
Aug 21 19:29:28.746: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 19:29:28.746: INFO: validating pod update-demo-nautilus-7gw2m
Aug 21 19:29:28.750: INFO: got data: { "image": "nautilus.jpg" }
Aug 21 19:29:28.750: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 19:29:28.750: INFO: update-demo-nautilus-7gw2m is verified up and running
Aug 21 19:29:28.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8cgt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7829'
Aug 21 19:29:28.837: INFO: stderr: ""
Aug 21 19:29:28.837: INFO: stdout: "true"
Aug 21 19:29:28.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t8cgt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7829'
Aug 21 19:29:28.926: INFO: stderr: ""
Aug 21 19:29:28.926: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Aug 21 19:29:28.927: INFO: validating pod update-demo-nautilus-t8cgt
Aug 21 19:29:28.929: INFO: got data: { "image": "nautilus.jpg" }
Aug 21 19:29:28.929: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Aug 21 19:29:28.929: INFO: update-demo-nautilus-t8cgt is verified up and running
STEP: using delete to clean up resources
Aug 21 19:29:28.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7829'
Aug 21 19:29:29.019: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 19:29:29.019: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Aug 21 19:29:29.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7829'
Aug 21 19:29:29.118: INFO: stderr: "No resources found.\n"
Aug 21 19:29:29.118: INFO: stdout: ""
Aug 21 19:29:29.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7829 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 19:29:29.220: INFO: stderr: ""
Aug 21 19:29:29.220: INFO: stdout: "update-demo-nautilus-7gw2m\nupdate-demo-nautilus-t8cgt\n"
Aug 21 19:29:29.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7829'
Aug 21 19:29:29.862: INFO: stderr: "No resources found.\n"
Aug 21 19:29:29.862: INFO: stdout: ""
Aug 21 19:29:29.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7829 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Aug 21 19:29:29.964: INFO: stderr: ""
Aug 21 19:29:29.964: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 19:29:29.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7829" for this suite.
Aug 21 19:29:51.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 19:29:52.058: INFO: namespace kubectl-7829 deletion completed in 22.091725519s
• [SLOW TEST:29.110 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create and stop a replication controller [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
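Note: the stderr warning above is the defining behavior of this cleanup path: with --grace-period=0 --force, kubectl confirms removal of the API object only, which is why the pod listing taken immediately afterwards still shows both replicas and only the retry half a second later comes back empty. The same check by hand, using this run's namespace and labels:

  # Force-delete returns before the containers have actually been stopped
  kubectl delete rc update-demo-nautilus --grace-period=0 --force --namespace=kubectl-7829
  # Poll until pods without a deletionTimestamp stop showing up
  kubectl get pods -l name=update-demo --namespace=kubectl-7829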
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 19:29:52.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Aug 21 19:29:56.156: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-3b67ff99-57cf-4d1d-8842-9253b7cfaf4f,GenerateName:,Namespace:events-7440,SelfLink:/api/v1/namespaces/events-7440/pods/send-events-3b67ff99-57cf-4d1d-8842-9253b7cfaf4f,UID:9b902f23-9fa9-4997-a600-4c42b5aaa8c9,ResourceVersion:1622366,Generation:0,CreationTimestamp:2020-08-21 19:29:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 106589990,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vlhcw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vlhcw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-vlhcw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0005fe990} {node.kubernetes.io/unreachable Exists NoExecute 0xc0005fe9c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:29:52 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:29:55 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:29:55 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:29:52 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.2,StartTime:2020-08-21 19:29:52 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-08-21 19:29:54 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://7ef2c5d3e7c37c2afe971849e098a054082e0b705b68133e469c972c63048b3c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Aug 21 19:29:58.207: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Aug 21 19:30:00.212: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 19:30:00.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7440" for this suite.
Aug 21 19:30:46.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 19:30:46.357: INFO: namespace events-7440 deletion completed in 46.131581951s
• [SLOW TEST:54.299 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
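Note: this test asserts two independent event sources for one pod: the default-scheduler's scheduling event and the kubelet's events from the node that ran it (iruya-worker2 in this run). The same events can be pulled by hand with a field selector on the involved object; a sketch using this run's pod name:

  # All events recorded for the send-events pod, from scheduler and kubelet alike
  kubectl get events --namespace=events-7440 --field-selector involvedObject.name=send-events-3b67ff99-57cf-4d1d-8842-9253b7cfaf4f
  # Typical reasons: Scheduled (scheduler), then Pulled/Created/Started (kubelet)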
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 19:30:46.358: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-bdcc8dcf-7a0e-4b0a-89ab-68a8e5a0d076
STEP: Creating configMap with name cm-test-opt-upd-70d56a74-a849-4e5a-a1bc-725391c333c7
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-bdcc8dcf-7a0e-4b0a-89ab-68a8e5a0d076
STEP: Updating configmap cm-test-opt-upd-70d56a74-a849-4e5a-a1bc-725391c333c7
STEP: Creating configMap with name cm-test-opt-create-d19edda5-20ca-4214-80c8-6837131e0f83
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 19:30:56.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2609" for this suite.
Aug 21 19:31:18.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 19:31:18.665: INFO: namespace projected-2609 deletion completed in 22.096511085s
• [SLOW TEST:32.308 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
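Note: the delete/update/create sequence above only converges because every configMap source in the projected volume is marked optional, so the kubelet keeps the pod running when a source disappears and re-projects the keys on its next sync. A minimal sketch of such a volume (pod and configMap names are illustrative, not this run's generated ones):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-cm-demo
  spec:
    containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: cm-test-opt-del    # may be deleted out from under the pod
            optional: true
        - configMap:
            name: cm-test-opt-upd    # key updates propagate into /etc/cfg
            optional: true
  EOF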
[sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 19:31:18.666: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 21 19:31:18.774: INFO: Waiting up to 5m0s for pod "downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732" in namespace "downward-api-201" to be "success or failure"
Aug 21 19:31:18.790: INFO: Pod "downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732": Phase="Pending", Reason="", readiness=false. Elapsed: 15.657646ms
Aug 21 19:31:21.228: INFO: Pod "downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732": Phase="Pending", Reason="", readiness=false. Elapsed: 2.45449191s
Aug 21 19:31:23.653: INFO: Pod "downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732": Phase="Pending", Reason="", readiness=false. Elapsed: 4.879601194s
Aug 21 19:31:25.712: INFO: Pod "downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732": Phase="Pending", Reason="", readiness=false. Elapsed: 6.937796113s
Aug 21 19:31:28.221: INFO: Pod "downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732": Phase="Running", Reason="", readiness=true. Elapsed: 9.447140705s
Aug 21 19:31:30.223: INFO: Pod "downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732": Phase="Succeeded", Reason="", readiness=false. Elapsed: 11.44960709s
STEP: Saw pod success
Aug 21 19:31:30.223: INFO: Pod "downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732" satisfied condition "success or failure"
Aug 21 19:31:30.225: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732 container client-container:
STEP: delete the pod
Aug 21 19:31:30.258: INFO: Waiting for pod downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732 to disappear
Aug 21 19:31:30.292: INFO: Pod downwardapi-volume-571b0a6f-2019-48ea-b189-d892da7be732 no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 19:31:30.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-201" for this suite.
Aug 21 19:31:36.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 19:31:36.392: INFO: namespace downward-api-201 deletion completed in 6.097519925s
• [SLOW TEST:17.726 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory request [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
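Note: the pod above passes by reading its own memory request back out of a downward API volume, which resourceFieldRef renders into a file. A minimal sketch of the mechanism (resource values and pod name are illustrative; the container name mirrors the test's client-container):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
      resources:
        requests:
          memory: 32Mi
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: memory_request
          resourceFieldRef:
            containerName: client-container
            resource: requests.memory    # default divisor of 1 yields the value in bytes
  EOF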
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 19:31:36.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Aug 21 19:31:42.994: INFO: Successfully updated pod "pod-update-a73a9f82-414f-4a61-8332-4693028bb8f3"
STEP: verifying the updated pod is in kubernetes
Aug 21 19:31:43.000: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 19:31:43.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6514" for this suite.
Aug 21 19:32:05.015: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 19:32:05.086: INFO: namespace pods-6514 deletion completed in 22.081982382s
• [SLOW TEST:28.694 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be updated [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
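Note: "updating the pod" here exercises the narrow set of pod fields the API server allows to change in place (notably metadata such as labels and annotations, container images, and activeDeadlineSeconds); most of a pod spec is immutable once created. A one-line equivalent of such an in-place update, using this run's pod and namespace (the label key/value are illustrative):

  # Labels are mutable in place; the pod keeps running across this change
  kubectl label pod pod-update-a73a9f82-414f-4a61-8332-4693028bb8f3 --namespace=pods-6514 time=updated --overwrite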
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4053.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 19:32:11.440: INFO: DNS probes using dns-4053/dns-test-3945d25a-6510-4b0f-a09e-8b52ec55081c succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:32:11.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4053" for this suite. Aug 21 19:32:17.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:32:17.843: INFO: namespace dns-4053 deletion completed in 6.18508411s • [SLOW TEST:12.757 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:32:17.844: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:32:17.904: INFO: Creating deployment "nginx-deployment" Aug 21 19:32:17.928: INFO: Waiting for observed generation 1 Aug 21 19:32:19.994: INFO: Waiting for all required pods to come up Aug 21 19:32:19.998: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Aug 21 19:32:30.006: INFO: Waiting for deployment "nginx-deployment" to complete Aug 21 19:32:30.010: INFO: Updating deployment "nginx-deployment" with a non-existent image Aug 21 19:32:30.015: INFO: Updating deployment nginx-deployment Aug 21 19:32:30.015: INFO: Waiting for observed generation 2 Aug 21 19:32:32.026: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Aug 21 19:32:32.028: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Aug 21 19:32:32.030: INFO: Waiting for 
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 19:32:17.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 21 19:32:17.904: INFO: Creating deployment "nginx-deployment"
Aug 21 19:32:17.928: INFO: Waiting for observed generation 1
Aug 21 19:32:19.994: INFO: Waiting for all required pods to come up
Aug 21 19:32:19.998: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Aug 21 19:32:30.006: INFO: Waiting for deployment "nginx-deployment" to complete
Aug 21 19:32:30.010: INFO: Updating deployment "nginx-deployment" with a non-existent image
Aug 21 19:32:30.015: INFO: Updating deployment nginx-deployment
Aug 21 19:32:30.015: INFO: Waiting for observed generation 2
Aug 21 19:32:32.026: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Aug 21 19:32:32.028: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Aug 21 19:32:32.030: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 21 19:32:32.037: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Aug 21 19:32:32.037: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Aug 21 19:32:32.039: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Aug 21 19:32:32.042: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Aug 21 19:32:32.042: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Aug 21 19:32:32.048: INFO: Updating deployment nginx-deployment
Aug 21 19:32:32.048: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Aug 21 19:32:32.095: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Aug 21 19:32:32.156: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Aug 21 19:32:32.570: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-1992,SelfLink:/apis/apps/v1/namespaces/deployment-1992/deployments/nginx-deployment,UID:7634d657-4b56-4888-9f00-63c648018649,ResourceVersion:1623000,Generation:3,CreationTimestamp:2020-08-21 19:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-08-21 19:32:30 +0000 UTC 2020-08-21 19:32:17 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-08-21 19:32:32 +0000 UTC 2020-08-21 19:32:32 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Aug 21 19:32:32.643: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-1992,SelfLink:/apis/apps/v1/namespaces/deployment-1992/replicasets/nginx-deployment-55fb7cb77f,UID:21cd5b35-26ce-4b1c-abec-0245972aff10,ResourceVersion:1623046,Generation:3,CreationTimestamp:2020-08-21 19:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7634d657-4b56-4888-9f00-63c648018649 0xc0022353d7 0xc0022353d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 21 19:32:32.643: INFO: All old ReplicaSets of Deployment "nginx-deployment": Aug 21 19:32:32.643: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-1992,SelfLink:/apis/apps/v1/namespaces/deployment-1992/replicasets/nginx-deployment-7b8c6f4498,UID:60628640-9ed7-40a9-9cb9-f84a3c5a684f,ResourceVersion:1623042,Generation:3,CreationTimestamp:2020-08-21 19:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 7634d657-4b56-4888-9f00-63c648018649 0xc0022354a7 0xc0022354a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Aug 21 19:32:32.680: INFO: Pod "nginx-deployment-55fb7cb77f-599jq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-599jq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-599jq,UID:16e4a411-3b71-41ba-bbf2-6048ae5bb089,ResourceVersion:1623053,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d542e7 0xc002d542e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d54360} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d543a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-08-21 19:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-21 19:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.681: INFO: Pod "nginx-deployment-55fb7cb77f-6lnjr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-6lnjr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-6lnjr,UID:bb2f2dd5-87e9-41a2-bbb6-375590aaf70e,ResourceVersion:1622977,Generation:0,CreationTimestamp:2020-08-21 19:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d54510 0xc002d54511}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d54600} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d54640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-21 19:32:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 
}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.681: INFO: Pod "nginx-deployment-55fb7cb77f-7zm9q" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7zm9q,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-7zm9q,UID:37205dd8-cb80-4af9-a9e0-b2764f088ea8,ResourceVersion:1623032,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d54760 0xc002d54761}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d547e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d54850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.681: INFO: Pod "nginx-deployment-55fb7cb77f-8bl6z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8bl6z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-8bl6z,UID:199a4a76-a868-4a7f-9355-453d51de1c46,ResourceVersion:1623036,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d54947 0xc002d54948}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil 
SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d54ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d54ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.681: INFO: Pod "nginx-deployment-55fb7cb77f-8pldz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8pldz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-8pldz,UID:24f48e7d-6c90-4b9a-a1a7-85818250d773,ResourceVersion:1622981,Generation:0,CreationTimestamp:2020-08-21 19:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d54b87 0xc002d54b88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d54c00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d54c20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-21 19:32:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.681: INFO: Pod "nginx-deployment-55fb7cb77f-d4pg9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d4pg9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-d4pg9,UID:f6082ff2-0d8a-4a2d-96aa-9bd916e87c0e,ResourceVersion:1623035,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d54cf0 0xc002d54cf1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d54ef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d54f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.681: INFO: Pod "nginx-deployment-55fb7cb77f-dkd2x" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dkd2x,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-dkd2x,UID:9c045f20-9b84-44f4-89c3-ed93b6e3839e,ResourceVersion:1622965,Generation:0,CreationTimestamp:2020-08-21 19:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d55027 0xc002d55028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d550e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d55140}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-21 19:32:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.681: INFO: Pod "nginx-deployment-55fb7cb77f-ftn6f" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ftn6f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-ftn6f,UID:e075f6dd-4b02-4ed6-80b8-798065af3468,ResourceVersion:1622964,Generation:0,CreationTimestamp:2020-08-21 19:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d55260 0xc002d55261}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d55380} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d55410}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC 
}],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-21 19:32:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.682: INFO: Pod "nginx-deployment-55fb7cb77f-gjgvp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-gjgvp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-gjgvp,UID:10b7eb48-5b16-4c77-bb19-52089ce7917d,ResourceVersion:1623040,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d55530 0xc002d55531}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d555b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d555d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.682: INFO: Pod "nginx-deployment-55fb7cb77f-q5fnk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q5fnk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-q5fnk,UID:79ed95b4-8897-41af-8187-e7dcdab81c64,ResourceVersion:1623008,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d55657 0xc002d55658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d556d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d55770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.682: INFO: Pod "nginx-deployment-55fb7cb77f-svd67" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-svd67,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-svd67,UID:8ffa6fb3-ecbb-4d6c-a92a-e65409f1a71c,ResourceVersion:1623020,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d55817 0xc002d55818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d559c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d559e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.682: INFO: Pod "nginx-deployment-55fb7cb77f-td948" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-td948,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-td948,UID:b9c53015-9b4a-4474-96a1-cd2a127de436,ResourceVersion:1622983,Generation:0,CreationTimestamp:2020-08-21 19:32:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d55a67 0xc002d55a68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d55ae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d55b10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:30 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:,StartTime:2020-08-21 19:32:30 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.682: INFO: Pod "nginx-deployment-55fb7cb77f-xzm2m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xzm2m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-55fb7cb77f-xzm2m,UID:ff2a80d7-d1b7-4a87-b192-2591cc5981b8,ResourceVersion:1623044,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 21cd5b35-26ce-4b1c-abec-0245972aff10 0xc002d55be0 0xc002d55be1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d55c60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d55c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.682: INFO: Pod "nginx-deployment-7b8c6f4498-476r2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-476r2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-476r2,UID:00a9887a-3353-4909-a0f8-6a764dc4e4d7,ResourceVersion:1622915,Generation:0,CreationTimestamp:2020-08-21 19:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d55d07 0xc002d55d08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d55d80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d55da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.184,StartTime:2020-08-21 19:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-21 19:32:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://a323ea9562910779621b4c86fd07740fcb5d955628edeb6e1af83fa2300729f1}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.682: INFO: Pod "nginx-deployment-7b8c6f4498-4bk7q" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4bk7q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-4bk7q,UID:d1e99db4-817c-4d38-b779-3d48d24490a8,ResourceVersion:1623034,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d55e77 0xc002d55e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d55ef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d55f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.682: INFO: Pod "nginx-deployment-7b8c6f4498-8vv9d" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8vv9d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-8vv9d,UID:83600554-a536-4c86-aa49-0c94caf5fc96,ResourceVersion:1622895,Generation:0,CreationTimestamp:2020-08-21 19:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d55f97 0xc002d55f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24010} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.181,StartTime:2020-08-21 19:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-21 19:32:26 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9ef7b91970adfa38009a60f5fbd51db6e567c701eba5133584ecc22103cb94a7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.683: INFO: Pod "nginx-deployment-7b8c6f4498-95775" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-95775,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-95775,UID:8a04cacf-d4cf-47ae-97f0-c6016ca10c82,ResourceVersion:1623024,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24107 0xc002d24108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24180} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d241a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.683: INFO: Pod "nginx-deployment-7b8c6f4498-9sg8d" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9sg8d,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-9sg8d,UID:fa60adfa-be78-4e0d-99eb-df510d9f228f,ResourceVersion:1623038,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24227 0xc002d24228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d242a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d242c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.683: INFO: Pod "nginx-deployment-7b8c6f4498-bww8r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bww8r,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-bww8r,UID:5e89b96e-08d0-46ae-b704-d4e54b98f33d,ResourceVersion:1623030,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24347 0xc002d24348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d243c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d243e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.683: INFO: Pod "nginx-deployment-7b8c6f4498-cv6bb" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cv6bb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-cv6bb,UID:b036b04e-a346-46fa-9f6c-006531cc7ab4,ResourceVersion:1622900,Generation:0,CreationTimestamp:2020-08-21 19:32:17 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24467 0xc002d24468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d244e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.8,StartTime:2020-08-21 19:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-21 19:32:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c5cb8527c9193ebd0a348fc22a936172d5f8f58a728ecaffca544d7923c922fa}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.683: INFO: Pod "nginx-deployment-7b8c6f4498-db4st" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-db4st,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-db4st,UID:5d44e0a8-1119-4b12-a7e5-8c6ceaead9d7,ResourceVersion:1623025,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d245d7 
0xc002d245d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.683: INFO: Pod "nginx-deployment-7b8c6f4498-gslqt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gslqt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-gslqt,UID:529fbaf6-2397-4d01-b08f-e9a8ab7a5e3a,ResourceVersion:1623023,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d246f7 0xc002d246f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24770} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.683: INFO: Pod "nginx-deployment-7b8c6f4498-j9ndl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j9ndl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-j9ndl,UID:5d911a3e-5614-4390-bd32-49d1d35e0839,ResourceVersion:1623045,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24827 0xc002d24828}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d248a0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d248c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-21 19:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.683: INFO: Pod "nginx-deployment-7b8c6f4498-jll6t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jll6t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-jll6t,UID:33494695-165f-47b5-9bec-eb7a688f12b1,ResourceVersion:1623039,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24987 0xc002d24988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.684: INFO: Pod "nginx-deployment-7b8c6f4498-jq7dv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jq7dv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-jq7dv,UID:88377f42-c8f4-4d50-9e4c-b99a3a008228,ResourceVersion:1622906,Generation:0,CreationTimestamp:2020-08-21 19:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24ac7 0xc002d24ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.9,StartTime:2020-08-21 19:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-21 19:32:27 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://65901d431e09d302c6b2625c8b0ea9c86d6a9bf5f8fcce98d162dd684b7d092d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.684: INFO: Pod "nginx-deployment-7b8c6f4498-jqwhr" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jqwhr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-jqwhr,UID:32ddcc8f-0943-45a0-907b-8785a123062a,ResourceVersion:1622921,Generation:0,CreationTimestamp:2020-08-21 19:32:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24c37 0xc002d24c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24cb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24cd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.183,StartTime:2020-08-21 19:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-21 19:32:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c3c08fb37e0651a5dc915bb4775d45bd3cb6d0a34f190ffd1acc792db83153a5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.684: INFO: Pod "nginx-deployment-7b8c6f4498-kddh8" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kddh8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-kddh8,UID:387ca666-47d5-4907-afee-f81963b1fa53,ResourceVersion:1623054,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24da7 0xc002d24da8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24e20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24e40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-08-21 19:32:32 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.684: INFO: Pod "nginx-deployment-7b8c6f4498-l449s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l449s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-l449s,UID:afc4cdac-c038-4b45-9346-30459d5030b1,ResourceVersion:1623026,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d24f07 0xc002d24f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d24f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d24fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.684: INFO: Pod "nginx-deployment-7b8c6f4498-r8cbl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r8cbl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-r8cbl,UID:37455dc5-1a31-43dd-afec-248524d61f0f,ResourceVersion:1623037,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d25027 0xc002d25028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d250a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d250c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.684: INFO: Pod "nginx-deployment-7b8c6f4498-rlj2b" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rlj2b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-rlj2b,UID:8e699c0e-6a63-4d23-8c06-93332c0b2147,ResourceVersion:1622871,Generation:0,CreationTimestamp:2020-08-21 19:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d25147 0xc002d25148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d251c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002d251e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:17 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:23 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:23 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.180,StartTime:2020-08-21 19:32:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-21 19:32:22 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://b65bc3bc13460b44d391065d9c794a6d7435003f918c64f09232cb32501368b0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.685: INFO: Pod "nginx-deployment-7b8c6f4498-rr8m4" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rr8m4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-rr8m4,UID:a7861361-bd2a-41e4-b1dc-214d74f73053,ResourceVersion:1622918,Generation:0,CreationTimestamp:2020-08-21 19:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d252b7 0xc002d252b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d25330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d25350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC } {Ready True 
0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:29 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.182,StartTime:2020-08-21 19:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-21 19:32:28 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://33c8a3183e379abf0c934a8790500b39af3b9eb411321045cca89dd3541944b8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.685: INFO: Pod "nginx-deployment-7b8c6f4498-xsckg" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xsckg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-xsckg,UID:706a9401-de94-473d-84ac-fb23d0954da5,ResourceVersion:1623009,Generation:0,CreationTimestamp:2020-08-21 19:32:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d25427 0xc002d25428}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d254a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d254c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:32 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:32:32.685: INFO: Pod "nginx-deployment-7b8c6f4498-zmm9b" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zmm9b,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-1992,SelfLink:/api/v1/namespaces/deployment-1992/pods/nginx-deployment-7b8c6f4498-zmm9b,UID:58814844-3c04-4f41-bdce-58462f9f0cb4,ResourceVersion:1622879,Generation:0,CreationTimestamp:2020-08-21 19:32:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 60628640-9ed7-40a9-9cb9-f84a3c5a684f 0xc002d25547 0xc002d25548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6j5t8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6j5t8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-6j5t8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d255c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d255e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:32:17 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.7,StartTime:2020-08-21 19:32:18 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-08-21 19:32:24 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c2121c5d5b1a3e5a6325b06f6457241368b0fa9cc743451e6a334aff7fbfe97d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:32:32.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-1992" for this suite. 
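What the proportional-scaling test above exercises: when a Deployment is resized while a rollout is still in flight, the deployment controller distributes the new replicas across the old and new ReplicaSets in proportion to their current sizes, within the rollout's maxSurge and maxUnavailable bounds, which is why the dumps above show a mix of available and still-Pending pods under ReplicaSet nginx-deployment-7b8c6f4498. The sketch below builds a Deployment of the same shape using the k8s.io/api Go types the log itself prints; the replica count and rolling-update bounds are illustrative assumptions, not values read from the test.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        // Hypothetical rollout bounds; the conformance test's actual values
        // are not shown in this excerpt.
        maxSurge := intstr.FromInt(3)
        maxUnavailable := intstr.FromInt(2)
        deployment := &appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(10),
                Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxSurge:       &maxSurge,
                        MaxUnavailable: &maxUnavailable,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        // If this deployment were scaled to 30 mid-rollout with the old and
        // new ReplicaSets at 8 and 2 replicas, the extra 20 would land
        // roughly 16/4, preserving the 8:2 ratio.
        fmt.Printf("strategy: %+v\n", deployment.Spec.Strategy)
    }
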
Aug 21 19:32:55.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:32:55.871: INFO: namespace deployment-1992 deletion completed in 23.156278174s • [SLOW TEST:38.027 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:32:55.871: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-846 [It] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-846 STEP: Creating statefulset with conflicting port in namespace statefulset-846 STEP: Waiting until pod test-pod starts running in namespace statefulset-846 STEP: Waiting until stateful pod ss-0 has been recreated and deleted at least once in namespace statefulset-846 Aug 21 19:33:02.252: INFO: Observed stateful pod in namespace: statefulset-846, name: ss-0, uid: 21695fc3-e993-4833-8d93-83f214007588, status phase: Pending. Waiting for statefulset controller to delete. Aug 21 19:33:02.388: INFO: Observed stateful pod in namespace: statefulset-846, name: ss-0, uid: 21695fc3-e993-4833-8d93-83f214007588, status phase: Failed. Waiting for statefulset controller to delete. Aug 21 19:33:02.399: INFO: Observed stateful pod in namespace: statefulset-846, name: ss-0, uid: 21695fc3-e993-4833-8d93-83f214007588, status phase: Failed. Waiting for statefulset controller to delete.
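The repeated Failed observations above are the intended mechanics of this test: a standalone pod and the StatefulSet's pod template are both pinned to the same node and bind the same host port, so the kubelet rejects ss-0 until the conflicting pod is removed, and the StatefulSet controller keeps deleting and recreating it. A minimal sketch of the two colliding objects, assuming an arbitrary host port (21017 below is hypothetical) and the suite's nginx image:

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        node := "iruya-worker" // any schedulable node
        // Both pod specs bind the same hostPort on the same node, so only
        // one of them can run at a time.
        port := corev1.ContainerPort{HostPort: 21017, ContainerPort: 21017} // hypothetical port
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
            Spec: corev1.PodSpec{
                NodeName: node,
                Containers: []corev1.Container{{
                    Name:  "conflict",
                    Image: "docker.io/library/nginx:1.14-alpine",
                    Ports: []corev1.ContainerPort{port},
                }},
            },
        }
        ss := &appsv1.StatefulSet{
            ObjectMeta: metav1.ObjectMeta{Name: "ss"},
            Spec: appsv1.StatefulSetSpec{
                ServiceName: "test",
                Replicas:    int32Ptr(1),
                Selector:    &metav1.LabelSelector{MatchLabels: map[string]string{"app": "ss"}},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"app": "ss"}},
                    Spec: corev1.PodSpec{
                        NodeName: node, // force the collision onto the same node
                        Containers: []corev1.Container{{
                            Name:  "webserver",
                            Image: "docker.io/library/nginx:1.14-alpine",
                            Ports: []corev1.ContainerPort{port},
                        }},
                    },
                },
            },
        }
        fmt.Println("conflicting objects:", pod.Name, "and", ss.Name+"-0")
    }

Deleting test-pod frees the port, after which ss-0 schedules normally; that is the transition the next lines record.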
Aug 21 19:33:02.438: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-846 STEP: Removing pod with conflicting port in namespace statefulset-846 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-846 and reaches the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 21 19:33:06.527: INFO: Deleting all statefulsets in ns statefulset-846 Aug 21 19:33:06.530: INFO: Scaling statefulset ss to 0 Aug 21 19:33:16.563: INFO: Waiting for statefulset status.replicas to be updated to 0 Aug 21 19:33:16.566: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:33:16.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-846" for this suite. Aug 21 19:33:22.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:33:22.923: INFO: namespace statefulset-846 deletion completed in 6.144690206s • [SLOW TEST:27.052 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:33:22.924: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-1898e8fb-ff35-41bf-8d6d-17ee449bc369 in namespace container-probe-5125 Aug 21 19:33:26.986: INFO: Started pod liveness-1898e8fb-ff35-41bf-8d6d-17ee449bc369 in namespace container-probe-5125 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 19:33:26.989: INFO: Initial restart count of pod liveness-1898e8fb-ff35-41bf-8d6d-17ee449bc369 is 0 Aug 21 19:33:43.031: INFO:
Restart count of pod container-probe-5125/liveness-1898e8fb-ff35-41bf-8d6d-17ee449bc369 is now 1 (16.041235683s elapsed) Aug 21 19:34:03.155: INFO: Restart count of pod container-probe-5125/liveness-1898e8fb-ff35-41bf-8d6d-17ee449bc369 is now 2 (36.165326665s elapsed) Aug 21 19:34:23.226: INFO: Restart count of pod container-probe-5125/liveness-1898e8fb-ff35-41bf-8d6d-17ee449bc369 is now 3 (56.236879363s elapsed) Aug 21 19:34:43.328: INFO: Restart count of pod container-probe-5125/liveness-1898e8fb-ff35-41bf-8d6d-17ee449bc369 is now 4 (1m16.338080822s elapsed) Aug 21 19:35:53.609: INFO: Restart count of pod container-probe-5125/liveness-1898e8fb-ff35-41bf-8d6d-17ee449bc369 is now 5 (2m26.619697921s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:35:53.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5125" for this suite. Aug 21 19:35:59.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:35:59.743: INFO: namespace container-probe-5125 deletion completed in 6.101806311s • [SLOW TEST:156.820 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:35:59.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-f755d375-c30d-49d0-a62d-4893507362f3 Aug 21 19:35:59.820: INFO: Pod name my-hostname-basic-f755d375-c30d-49d0-a62d-4893507362f3: Found 0 pods out of 1 Aug 21 19:36:04.825: INFO: Pod name my-hostname-basic-f755d375-c30d-49d0-a62d-4893507362f3: Found 1 pod out of 1 Aug 21 19:36:04.825: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f755d375-c30d-49d0-a62d-4893507362f3" are running Aug 21 19:36:04.828: INFO: Pod "my-hostname-basic-f755d375-c30d-49d0-a62d-4893507362f3-fgd75" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 19:35:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01
00:00:00 +0000 UTC LastTransitionTime:2020-08-21 19:36:02 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 19:36:02 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-08-21 19:35:59 +0000 UTC Reason: Message:}]) Aug 21 19:36:04.828: INFO: Trying to dial the pod Aug 21 19:36:09.839: INFO: Controller my-hostname-basic-f755d375-c30d-49d0-a62d-4893507362f3: Got expected result from replica 1 [my-hostname-basic-f755d375-c30d-49d0-a62d-4893507362f3-fgd75]: "my-hostname-basic-f755d375-c30d-49d0-a62d-4893507362f3-fgd75", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:36:09.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-4222" for this suite. Aug 21 19:36:15.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:36:15.935: INFO: namespace replication-controller-4222 deletion completed in 6.092036347s • [SLOW TEST:16.191 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:36:15.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 19:36:16.065: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e" in namespace "downward-api-5883" to be "success or failure" Aug 21 19:36:16.097: INFO: Pod "downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e": Phase="Pending", Reason="", readiness=false. Elapsed: 31.292448ms Aug 21 19:36:18.101: INFO: Pod "downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.035118759s Aug 21 19:36:20.104: INFO: Pod "downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e": Phase="Running", Reason="", readiness=true. Elapsed: 4.039082382s Aug 21 19:36:22.109: INFO: Pod "downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.04353006s STEP: Saw pod success Aug 21 19:36:22.109: INFO: Pod "downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e" satisfied condition "success or failure" Aug 21 19:36:22.112: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e container client-container: STEP: delete the pod Aug 21 19:36:22.143: INFO: Waiting for pod downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e to disappear Aug 21 19:36:22.146: INFO: Pod downwardapi-volume-3e98af8b-6666-4dc7-8ab6-802b141a940e no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:36:22.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5883" for this suite. Aug 21 19:36:28.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:36:28.315: INFO: namespace downward-api-5883 deletion completed in 6.163485633s • [SLOW TEST:12.380 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:36:28.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:36:28.417: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Aug 21 19:36:28.423: INFO: Number of nodes with available pods: 0 Aug 21 19:36:28.423: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
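A brief note on what drives the churn that follows: daemon pods are gated by the node selector in the DaemonSet's pod template, so the controller schedules a pod only onto nodes whose labels match, and relabeling a node in or out of the selector launches or evicts the pod. A minimal sketch of such a DaemonSet, assuming a hypothetical label key color (the suite's real key is not visible in this excerpt):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"app": "daemon-set"}
        ds := &appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        // Only nodes carrying this label run the daemon pod;
                        // relabeling a node from blue to green evicts the
                        // pod, matching the log around this point.
                        NodeSelector: map[string]string{"color": "blue"}, // hypothetical label key
                        Containers: []corev1.Container{{
                            Name:  "app",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        fmt.Println("daemonset node selector:", ds.Spec.Template.Spec.NodeSelector)
    }

Switching the template's nodeSelector to green while setting the update strategy to RollingUpdate, as the test does next, moves the daemon pod to whichever nodes carry the new label.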
Aug 21 19:36:28.614: INFO: Number of nodes with available pods: 0 Aug 21 19:36:28.614: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:29.728: INFO: Number of nodes with available pods: 0 Aug 21 19:36:29.728: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:30.617: INFO: Number of nodes with available pods: 0 Aug 21 19:36:30.617: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:31.618: INFO: Number of nodes with available pods: 0 Aug 21 19:36:31.618: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:32.618: INFO: Number of nodes with available pods: 1 Aug 21 19:36:32.618: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Aug 21 19:36:32.647: INFO: Number of nodes with available pods: 1 Aug 21 19:36:32.647: INFO: Number of running nodes: 0, number of available pods: 1 Aug 21 19:36:33.652: INFO: Number of nodes with available pods: 0 Aug 21 19:36:33.652: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Aug 21 19:36:33.665: INFO: Number of nodes with available pods: 0 Aug 21 19:36:33.665: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:34.669: INFO: Number of nodes with available pods: 0 Aug 21 19:36:34.669: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:35.669: INFO: Number of nodes with available pods: 0 Aug 21 19:36:35.669: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:36.704: INFO: Number of nodes with available pods: 0 Aug 21 19:36:36.704: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:37.670: INFO: Number of nodes with available pods: 0 Aug 21 19:36:37.670: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:38.669: INFO: Number of nodes with available pods: 0 Aug 21 19:36:38.669: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:39.669: INFO: Number of nodes with available pods: 0 Aug 21 19:36:39.669: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:40.670: INFO: Number of nodes with available pods: 0 Aug 21 19:36:40.670: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:41.669: INFO: Number of nodes with available pods: 0 Aug 21 19:36:41.669: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:42.670: INFO: Number of nodes with available pods: 0 Aug 21 19:36:42.670: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:43.670: INFO: Number of nodes with available pods: 0 Aug 21 19:36:43.670: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:44.722: INFO: Number of nodes with available pods: 0 Aug 21 19:36:44.722: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:45.668: INFO: Number of nodes with available pods: 0 Aug 21 19:36:45.668: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:46.669: INFO: Number of nodes with available pods: 0 Aug 21 19:36:46.669: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:36:47.670: INFO: Number of nodes with available pods: 1 Aug 21 19:36:47.670: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1455, will wait for the garbage collector to delete the pods Aug 21 19:36:47.734: INFO: Deleting DaemonSet.extensions daemon-set took: 6.045582ms Aug 21 19:36:48.035: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.331126ms Aug 21 19:36:53.438: INFO: Number of nodes with available pods: 0 Aug 21 19:36:53.439: INFO: Number of running nodes: 0, number of available pods: 0 Aug 21 19:36:53.441: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1455/daemonsets","resourceVersion":"1624070"},"items":null} Aug 21 19:36:53.444: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1455/pods","resourceVersion":"1624070"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:36:53.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1455" for this suite. Aug 21 19:37:01.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:37:01.623: INFO: namespace daemonsets-1455 deletion completed in 8.104811536s • [SLOW TEST:33.308 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:37:01.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 19:37:01.736: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9" in namespace "downward-api-43" to be "success or failure" Aug 21 19:37:01.741: INFO: Pod "downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9": Phase="Pending", Reason="", 
readiness=false. Elapsed: 5.22973ms Aug 21 19:37:03.745: INFO: Pod "downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009040371s Aug 21 19:37:05.749: INFO: Pod "downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9": Phase="Running", Reason="", readiness=true. Elapsed: 4.01337035s Aug 21 19:37:07.753: INFO: Pod "downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017606661s STEP: Saw pod success Aug 21 19:37:07.753: INFO: Pod "downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9" satisfied condition "success or failure" Aug 21 19:37:07.757: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9 container client-container: STEP: delete the pod Aug 21 19:37:07.776: INFO: Waiting for pod downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9 to disappear Aug 21 19:37:07.780: INFO: Pod downwardapi-volume-23862c21-e38f-43c2-b7d6-e7ea0c12cac9 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:37:07.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-43" for this suite. Aug 21 19:37:13.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:37:13.864: INFO: namespace downward-api-43 deletion completed in 6.080612295s • [SLOW TEST:12.241 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:37:13.864: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:37:13.953: INFO: Pod name rollover-pod: Found 0 pods out of 1 Aug 21 19:37:18.958: INFO: Pod name rollover-pod: Found 1 pod out of 1 STEP: ensuring each pod is running Aug 21 19:37:18.958: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Aug 21 19:37:20.962: INFO: Creating deployment "test-rollover-deployment" Aug 21 19:37:20.972: INFO: Make sure deployment
"test-rollover-deployment" performs scaling operations Aug 21 19:37:22.978: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Aug 21 19:37:22.985: INFO: Ensure that both replica sets have 1 created replica Aug 21 19:37:22.991: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Aug 21 19:37:22.998: INFO: Updating deployment test-rollover-deployment Aug 21 19:37:22.998: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Aug 21 19:37:25.008: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Aug 21 19:37:25.015: INFO: Make sure deployment "test-rollover-deployment" is complete Aug 21 19:37:25.021: INFO: all replica sets need to contain the pod-template-hash label Aug 21 19:37:25.021: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635443, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635440, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:37:27.029: INFO: all replica sets need to contain the pod-template-hash label Aug 21 19:37:27.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635445, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635440, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:37:29.030: INFO: all replica sets need to contain the pod-template-hash label Aug 21 19:37:29.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635445, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635440, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:37:31.030: INFO: all replica sets need to contain the pod-template-hash label Aug 21 19:37:31.030: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635445, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635440, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:37:33.029: INFO: all replica sets need to contain the pod-template-hash label Aug 21 19:37:33.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635445, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635440, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:37:35.029: INFO: all replica sets need to contain the pod-template-hash label Aug 21 19:37:35.029: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635441, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635445, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733635440, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Aug 21 19:37:37.030: INFO: Aug 21 19:37:37.030: INFO: Ensure that 
both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 21 19:37:37.039: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3740,SelfLink:/apis/apps/v1/namespaces/deployment-3740/deployments/test-rollover-deployment,UID:f9b08049-02d9-44af-adcd-05d84b750328,ResourceVersion:1624274,Generation:2,CreationTimestamp:2020-08-21 19:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-08-21 19:37:21 +0000 UTC 2020-08-21 19:37:21 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-08-21 19:37:36 +0000 UTC 2020-08-21 19:37:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Aug 21 19:37:37.043: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3740,SelfLink:/apis/apps/v1/namespaces/deployment-3740/replicasets/test-rollover-deployment-854595fc44,UID:eb9d322f-e87f-41ba-b6b2-85be3fa1d0df,ResourceVersion:1624263,Generation:2,CreationTimestamp:2020-08-21 19:37:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f9b08049-02d9-44af-adcd-05d84b750328 0xc001dc3127 0xc001dc3128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 21 19:37:37.043: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Aug 21 19:37:37.043: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3740,SelfLink:/apis/apps/v1/namespaces/deployment-3740/replicasets/test-rollover-controller,UID:3737a541-22b3-4b6b-955a-e35060e0f2d6,ResourceVersion:1624272,Generation:2,CreationTimestamp:2020-08-21 19:37:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 
f9b08049-02d9-44af-adcd-05d84b750328 0xc001dc3057 0xc001dc3058}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 21 19:37:37.044: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3740,SelfLink:/apis/apps/v1/namespaces/deployment-3740/replicasets/test-rollover-deployment-9b8b997cf,UID:e8d502c6-e89d-4ad8-b394-1b90846076fa,ResourceVersion:1624225,Generation:2,CreationTimestamp:2020-08-21 19:37:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment f9b08049-02d9-44af-adcd-05d84b750328 0xc001dc3200 0xc001dc3201}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 21 19:37:37.047: INFO: Pod "test-rollover-deployment-854595fc44-qkwhz" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-qkwhz,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3740,SelfLink:/api/v1/namespaces/deployment-3740/pods/test-rollover-deployment-854595fc44-qkwhz,UID:9c46cbc2-92e7-4965-af9f-367e5e491e38,ResourceVersion:1624240,Generation:0,CreationTimestamp:2020-08-21 19:37:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 eb9d322f-e87f-41ba-b6b2-85be3fa1d0df 0xc000ab5147 0xc000ab5148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-p5996 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-p5996,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-p5996 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000ab51c0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc000ab51e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:37:23 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:37:25 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:37:25 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:37:23 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.202,StartTime:2020-08-21 19:37:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-08-21 19:37:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://64f9ff4dedc6d2fb8e04377046455151f1311f513ebf73831551078851471d99}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:37:37.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3740" for this suite. Aug 21 19:37:43.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:37:43.524: INFO: namespace deployment-3740 deletion completed in 6.473479458s • [SLOW TEST:29.659 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:37:43.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-3aa58460-20f5-4ade-abe4-b280366008f7 STEP: Creating a pod to test consume configMaps Aug 21 19:37:43.621: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d24dc2d4-9d9c-4bc7-9f6c-71508d2fb297" in namespace "projected-1435" to be "success or failure" Aug 21 19:37:43.654: INFO: Pod "pod-projected-configmaps-d24dc2d4-9d9c-4bc7-9f6c-71508d2fb297": Phase="Pending", Reason="", readiness=false. 
Elapsed: 33.367729ms Aug 21 19:37:45.659: INFO: Pod "pod-projected-configmaps-d24dc2d4-9d9c-4bc7-9f6c-71508d2fb297": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038048584s Aug 21 19:37:47.664: INFO: Pod "pod-projected-configmaps-d24dc2d4-9d9c-4bc7-9f6c-71508d2fb297": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043005747s STEP: Saw pod success Aug 21 19:37:47.664: INFO: Pod "pod-projected-configmaps-d24dc2d4-9d9c-4bc7-9f6c-71508d2fb297" satisfied condition "success or failure" Aug 21 19:37:47.666: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-d24dc2d4-9d9c-4bc7-9f6c-71508d2fb297 container projected-configmap-volume-test: STEP: delete the pod Aug 21 19:37:47.741: INFO: Waiting for pod pod-projected-configmaps-d24dc2d4-9d9c-4bc7-9f6c-71508d2fb297 to disappear Aug 21 19:37:47.744: INFO: Pod pod-projected-configmaps-d24dc2d4-9d9c-4bc7-9f6c-71508d2fb297 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:37:47.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1435" for this suite. Aug 21 19:37:53.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:37:53.867: INFO: namespace projected-1435 deletion completed in 6.119724895s • [SLOW TEST:10.344 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:37:53.868: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 19:37:53.976: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2332a2d2-8cb6-4110-a354-1e2f9729a453" in namespace "projected-1564" to be "success or failure" Aug 21 19:37:53.980: INFO: Pod "downwardapi-volume-2332a2d2-8cb6-4110-a354-1e2f9729a453": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.402345ms Aug 21 19:37:55.985: INFO: Pod "downwardapi-volume-2332a2d2-8cb6-4110-a354-1e2f9729a453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008861245s Aug 21 19:37:57.989: INFO: Pod "downwardapi-volume-2332a2d2-8cb6-4110-a354-1e2f9729a453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013516992s STEP: Saw pod success Aug 21 19:37:57.990: INFO: Pod "downwardapi-volume-2332a2d2-8cb6-4110-a354-1e2f9729a453" satisfied condition "success or failure" Aug 21 19:37:57.992: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-2332a2d2-8cb6-4110-a354-1e2f9729a453 container client-container: STEP: delete the pod Aug 21 19:37:58.019: INFO: Waiting for pod downwardapi-volume-2332a2d2-8cb6-4110-a354-1e2f9729a453 to disappear Aug 21 19:37:58.022: INFO: Pod downwardapi-volume-2332a2d2-8cb6-4110-a354-1e2f9729a453 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:37:58.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1564" for this suite. Aug 21 19:38:04.038: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:38:04.106: INFO: namespace projected-1564 deletion completed in 6.080862346s • [SLOW TEST:10.238 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:38:04.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:38:08.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9914" for this suite. 
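The wrapper-volume check above reduces to mounting a Secret volume and a ConfigMap volume side by side in one pod and verifying neither clobbers the other. A minimal hand-runnable sketch of that shape (object, pod, and mount names here are illustrative, not the generated ones the test used):

# illustrative names; the e2e test generates its own secret/configmap/pod
kubectl create secret generic wrapper-secret --from-literal=data-1=value-1
kubectl create configmap wrapper-configmap --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-pod
spec:
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # both volumes should materialize without conflicting
    command: ["sh", "-c", "ls /etc/secret-volume /etc/configmap-volume && sleep 3600"]
    volumeMounts:
    - { name: secret-volume, mountPath: /etc/secret-volume }
    - { name: configmap-volume, mountPath: /etc/configmap-volume }
  volumes:
  - name: secret-volume
    secret: { secretName: wrapper-secret }
  - name: configmap-volume
    configMap: { name: wrapper-configmap }
EOF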
Aug 21 19:38:14.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:38:14.447: INFO: namespace emptydir-wrapper-9914 deletion completed in 6.113363864s • [SLOW TEST:10.340 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:38:14.447: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 21 19:38:14.496: INFO: Waiting up to 5m0s for pod "downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29" in namespace "downward-api-5200" to be "success or failure" Aug 21 19:38:14.543: INFO: Pod "downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29": Phase="Pending", Reason="", readiness=false. Elapsed: 47.756004ms Aug 21 19:38:16.547: INFO: Pod "downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051265758s Aug 21 19:38:19.057: INFO: Pod "downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.561916153s Aug 21 19:38:21.062: INFO: Pod "downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.566410594s STEP: Saw pod success Aug 21 19:38:21.062: INFO: Pod "downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29" satisfied condition "success or failure" Aug 21 19:38:21.065: INFO: Trying to get logs from node iruya-worker2 pod downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29 container dapi-container: STEP: delete the pod Aug 21 19:38:21.139: INFO: Waiting for pod downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29 to disappear Aug 21 19:38:21.159: INFO: Pod downward-api-41e758ad-936c-4cef-86b4-b206d7bb2f29 no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:38:21.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5200" for this suite. 
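The pod behind this Downward API test sets no resource limits of its own; with a resourceFieldRef the kubelet then falls back to the node's allocatable values, which is exactly what "default limits.cpu/memory from node allocatable" means. A minimal sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-defaults
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU_LIMIT|MEMORY_LIMIT'"]
    # no resources set: limits.cpu/limits.memory resolve to node allocatable
    env:
    - name: CPU_LIMIT
      valueFrom: { resourceFieldRef: { resource: limits.cpu } }
    - name: MEMORY_LIMIT
      valueFrom: { resourceFieldRef: { resource: limits.memory } }
EOF
kubectl logs dapi-defaults   # prints the node-allocatable CPU/memory values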
Aug 21 19:38:27.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:38:27.260: INFO: namespace downward-api-5200 deletion completed in 6.097892399s • [SLOW TEST:12.813 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:38:27.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Aug 21 19:38:27.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-8705 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Aug 21 19:38:33.232: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0821 19:38:33.129383 1671 log.go:172] (0xc00091a0b0) (0xc000b221e0) Create stream\nI0821 19:38:33.129421 1671 log.go:172] (0xc00091a0b0) (0xc000b221e0) Stream added, broadcasting: 1\nI0821 19:38:33.133380 1671 log.go:172] (0xc00091a0b0) Reply frame received for 1\nI0821 19:38:33.133436 1671 log.go:172] (0xc00091a0b0) (0xc00057da40) Create stream\nI0821 19:38:33.133465 1671 log.go:172] (0xc00091a0b0) (0xc00057da40) Stream added, broadcasting: 3\nI0821 19:38:33.134205 1671 log.go:172] (0xc00091a0b0) Reply frame received for 3\nI0821 19:38:33.134249 1671 log.go:172] (0xc00091a0b0) (0xc000b22000) Create stream\nI0821 19:38:33.134274 1671 log.go:172] (0xc00091a0b0) (0xc000b22000) Stream added, broadcasting: 5\nI0821 19:38:33.134943 1671 log.go:172] (0xc00091a0b0) Reply frame received for 5\nI0821 19:38:33.134974 1671 log.go:172] (0xc00091a0b0) (0xc00069e1e0) Create stream\nI0821 19:38:33.134983 1671 log.go:172] (0xc00091a0b0) (0xc00069e1e0) Stream added, broadcasting: 7\nI0821 19:38:33.135617 1671 log.go:172] (0xc00091a0b0) Reply frame received for 7\nI0821 19:38:33.135750 1671 log.go:172] (0xc00057da40) (3) Writing data frame\nI0821 19:38:33.135849 1671 log.go:172] (0xc00057da40) (3) Writing data frame\nI0821 19:38:33.136661 1671 log.go:172] (0xc00091a0b0) Data frame received for 5\nI0821 19:38:33.136688 1671 log.go:172] (0xc000b22000) (5) Data frame handling\nI0821 19:38:33.136715 1671 log.go:172] (0xc000b22000) (5) Data frame sent\nI0821 19:38:33.137431 1671 log.go:172] (0xc00091a0b0) Data frame received for 5\nI0821 19:38:33.137448 1671 log.go:172] (0xc000b22000) (5) Data frame handling\nI0821 19:38:33.137469 1671 log.go:172] (0xc000b22000) (5) Data frame sent\nI0821 19:38:33.178997 1671 log.go:172] (0xc00091a0b0) Data frame received for 7\nI0821 19:38:33.179041 1671 log.go:172] (0xc00069e1e0) (7) Data frame handling\nI0821 19:38:33.179085 1671 log.go:172] (0xc00091a0b0) Data frame received for 5\nI0821 19:38:33.179141 1671 log.go:172] (0xc000b22000) (5) Data frame handling\nI0821 19:38:33.179463 1671 log.go:172] (0xc00091a0b0) Data frame received for 1\nI0821 19:38:33.179482 1671 log.go:172] (0xc000b221e0) (1) Data frame handling\nI0821 19:38:33.179521 1671 log.go:172] (0xc000b221e0) (1) Data frame sent\nI0821 19:38:33.179685 1671 log.go:172] (0xc00091a0b0) (0xc000b221e0) Stream removed, broadcasting: 1\nI0821 19:38:33.179775 1671 log.go:172] (0xc00091a0b0) (0xc00057da40) Stream removed, broadcasting: 3\nI0821 19:38:33.179809 1671 log.go:172] (0xc00091a0b0) (0xc000b221e0) Stream removed, broadcasting: 1\nI0821 19:38:33.179817 1671 log.go:172] (0xc00091a0b0) (0xc00057da40) Stream removed, broadcasting: 3\nI0821 19:38:33.179823 1671 log.go:172] (0xc00091a0b0) (0xc000b22000) Stream removed, broadcasting: 5\nI0821 19:38:33.179831 1671 log.go:172] (0xc00091a0b0) (0xc00069e1e0) Stream removed, broadcasting: 7\nI0821 19:38:33.179904 1671 log.go:172] (0xc00091a0b0) Go away received\n" Aug 21 19:38:33.232: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:38:35.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8705" for this suite. 
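The command under test above is plain kubectl and can be replayed by hand against a v1.15-era client (the job/v1 generator was deprecated here and later removed; on modern kubectl, kubectl create job is the equivalent):

# stdin is piped in; --rm deletes the job once the attached session ends
echo 'abcd1234' | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c 'cat && echo stdin closed'
kubectl get job e2e-test-rm-busybox-job   # NotFound once --rm has cleaned up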
Aug 21 19:38:45.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:38:45.388: INFO: namespace kubectl-8705 deletion completed in 10.144874257s • [SLOW TEST:18.128 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:38:45.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Aug 21 19:38:45.492: INFO: Waiting up to 5m0s for pod "downward-api-41674422-e8a0-47ec-80e5-db2ba3801fed" in namespace "downward-api-7533" to be "success or failure" Aug 21 19:38:45.508: INFO: Pod "downward-api-41674422-e8a0-47ec-80e5-db2ba3801fed": Phase="Pending", Reason="", readiness=false. Elapsed: 15.876192ms Aug 21 19:38:47.576: INFO: Pod "downward-api-41674422-e8a0-47ec-80e5-db2ba3801fed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083950687s Aug 21 19:38:49.580: INFO: Pod "downward-api-41674422-e8a0-47ec-80e5-db2ba3801fed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087975618s STEP: Saw pod success Aug 21 19:38:49.580: INFO: Pod "downward-api-41674422-e8a0-47ec-80e5-db2ba3801fed" satisfied condition "success or failure" Aug 21 19:38:49.583: INFO: Trying to get logs from node iruya-worker2 pod downward-api-41674422-e8a0-47ec-80e5-db2ba3801fed container dapi-container: STEP: delete the pod Aug 21 19:38:49.604: INFO: Waiting for pod downward-api-41674422-e8a0-47ec-80e5-db2ba3801fed to disappear Aug 21 19:38:49.608: INFO: Pod downward-api-41674422-e8a0-47ec-80e5-db2ba3801fed no longer exists [AfterEach] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:38:49.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7533" for this suite. 
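Unlike the allocatable-default sketch earlier, this test sets explicit requests and limits and surfaces them as env vars. Roughly (names and quantities illustrative; the divisor makes the values print in millicores/MiB instead of being rounded up to whole units):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-resources
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "env | grep -E 'CPU|MEMORY'"]
    resources:
      requests: { cpu: 250m, memory: 32Mi }
      limits: { cpu: 500m, memory: 64Mi }
    env:
    - name: CPU_REQUEST
      valueFrom: { resourceFieldRef: { resource: requests.cpu, divisor: 1m } }
    - name: CPU_LIMIT
      valueFrom: { resourceFieldRef: { resource: limits.cpu, divisor: 1m } }
    - name: MEMORY_REQUEST
      valueFrom: { resourceFieldRef: { resource: requests.memory, divisor: 1Mi } }
    - name: MEMORY_LIMIT
      valueFrom: { resourceFieldRef: { resource: limits.memory, divisor: 1Mi } }
EOF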
Aug 21 19:38:55.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:38:55.699: INFO: namespace downward-api-7533 deletion completed in 6.085866485s • [SLOW TEST:10.310 seconds] [sig-node] Downward API /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:38:55.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-582cc026-1573-4372-9474-decac4d89bb8 in namespace container-probe-3342 Aug 21 19:38:59.792: INFO: Started pod busybox-582cc026-1573-4372-9474-decac4d89bb8 in namespace container-probe-3342 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 19:38:59.795: INFO: Initial restart count of pod busybox-582cc026-1573-4372-9474-decac4d89bb8 is 0 Aug 21 19:39:55.918: INFO: Restart count of pod container-probe-3342/busybox-582cc026-1573-4372-9474-decac4d89bb8 is now 1 (56.123466598s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:39:55.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3342" for this suite. 
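The busybox pod in this probe test follows the classic pattern: create /tmp/health, remove it after a delay, and let an exec probe notice the file is gone. A minimal sketch, assuming docs-style timings rather than the test's exact ones:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-exec
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # healthy for 10s, then the probe target disappears
    command: ["sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
kubectl get pod liveness-exec   # RESTARTS climbs once the probe starts failing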
Aug 21 19:40:01.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:40:02.047: INFO: namespace container-probe-3342 deletion completed in 6.088334837s • [SLOW TEST:66.348 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:40:02.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-97a5941c-1dac-45a5-a21a-a7f479d97e69 in namespace container-probe-9713 Aug 21 19:40:06.234: INFO: Started pod liveness-97a5941c-1dac-45a5-a21a-a7f479d97e69 in namespace container-probe-9713 STEP: checking the pod's current state and verifying that restartCount is present Aug 21 19:40:06.236: INFO: Initial restart count of pod liveness-97a5941c-1dac-45a5-a21a-a7f479d97e69 is 0 Aug 21 19:40:30.299: INFO: Restart count of pod container-probe-9713/liveness-97a5941c-1dac-45a5-a21a-a7f479d97e69 is now 1 (24.062845952s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:40:30.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-9713" for this suite. 
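Same idea over HTTP: the kubelet GETs /healthz and restarts the container once it starts returning errors. A sketch using the upstream docs sample image (the e2e suite uses its own test image, so treat the image and port here as assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # docs sample: /healthz starts failing after ~10s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
kubectl get pod liveness-http   # restartCount goes 0 -> 1, as observed above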
Aug 21 19:40:36.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:40:36.412: INFO: namespace container-probe-9713 deletion completed in 6.093068657s • [SLOW TEST:34.365 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:40:36.413: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 21 19:40:36.494: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6003' Aug 21 19:40:36.611: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 21 19:40:36.611: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: rolling-update to same image controller Aug 21 19:40:36.651: INFO: scanned /root for discovery docs: Aug 21 19:40:36.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-6003' Aug 21 19:40:53.522: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Aug 21 19:40:53.522: INFO: stdout: "Created e2e-test-nginx-rc-9ac58d4a34c47b1f5cb9e017c1022095\nScaling up e2e-test-nginx-rc-9ac58d4a34c47b1f5cb9e017c1022095 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-9ac58d4a34c47b1f5cb9e017c1022095 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-9ac58d4a34c47b1f5cb9e017c1022095 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Aug 21 19:40:53.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-6003' Aug 21 19:40:53.624: INFO: stderr: "" Aug 21 19:40:53.624: INFO: stdout: "e2e-test-nginx-rc-9ac58d4a34c47b1f5cb9e017c1022095-r9t85 " Aug 21 19:40:53.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-9ac58d4a34c47b1f5cb9e017c1022095-r9t85 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6003' Aug 21 19:40:53.717: INFO: stderr: "" Aug 21 19:40:53.717: INFO: stdout: "true" Aug 21 19:40:53.717: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-9ac58d4a34c47b1f5cb9e017c1022095-r9t85 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6003' Aug 21 19:40:53.804: INFO: stderr: "" Aug 21 19:40:53.804: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Aug 21 19:40:53.804: INFO: e2e-test-nginx-rc-9ac58d4a34c47b1f5cb9e017c1022095-r9t85 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Aug 21 19:40:53.805: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6003' Aug 21 19:40:53.912: INFO: stderr: "" Aug 21 19:40:53.912: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:40:53.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6003" for this suite.
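Both commands the test ran are plain kubectl and can be replayed as-is on a v1.15-era client (the run/v1 generator and rolling-update itself are deprecated there and gone from modern kubectl):

kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update e2e-test-nginx-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# the RC is replaced by a hash-suffixed copy, then renamed back, as logged above
kubectl get pods -l run=e2e-test-nginx-rc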
Aug 21 19:41:15.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:41:16.063: INFO: namespace kubectl-6003 deletion completed in 22.116228477s • [SLOW TEST:39.651 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:41:16.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod to be running STEP: Getting the pod STEP: Reading file content from the nginx-container Aug 21 19:41:22.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-976c465c-804a-4127-b15c-968700843634 -c busybox-main-container --namespace=emptydir-9092 -- cat /usr/share/volumeshare/shareddata.txt' Aug 21 19:41:22.480: INFO: stderr: "I0821 19:41:22.397084 1833 log.go:172] (0xc0009da580) (0xc0005d6d20) Create stream\nI0821 19:41:22.397133 1833 log.go:172] (0xc0009da580) (0xc0005d6d20) Stream added, broadcasting: 1\nI0821 19:41:22.399990 1833 log.go:172] (0xc0009da580) Reply frame received for 1\nI0821 19:41:22.400062 1833 log.go:172] (0xc0009da580) (0xc0005d63c0) Create stream\nI0821 19:41:22.400092 1833 log.go:172] (0xc0009da580) (0xc0005d63c0) Stream added, broadcasting: 3\nI0821 19:41:22.401170 1833 log.go:172] (0xc0009da580) Reply frame received for 3\nI0821 19:41:22.401212 1833 log.go:172] (0xc0009da580) (0xc000570000) Create stream\nI0821 19:41:22.401225 1833 log.go:172] (0xc0009da580) (0xc000570000) Stream added, broadcasting: 5\nI0821 19:41:22.402036 1833 log.go:172] (0xc0009da580) Reply frame received for 5\nI0821 19:41:22.470654 1833 log.go:172] (0xc0009da580) Data frame received for 3\nI0821 19:41:22.470705 1833 log.go:172] (0xc0005d63c0) (3) Data frame handling\nI0821 19:41:22.470721 1833 log.go:172] (0xc0005d63c0) (3) Data frame sent\nI0821 19:41:22.470734 1833 log.go:172] (0xc0009da580) Data frame received for 3\nI0821 19:41:22.470744 1833 log.go:172] (0xc0005d63c0) (3) Data frame handling\nI0821 19:41:22.470776 1833 log.go:172] (0xc0009da580) Data frame received for 5\nI0821 19:41:22.470786 1833 log.go:172] (0xc000570000) (5) Data frame handling\nI0821
19:41:22.473504 1833 log.go:172] (0xc0009da580) Data frame received for 1\nI0821 19:41:22.473523 1833 log.go:172] (0xc0005d6d20) (1) Data frame handling\nI0821 19:41:22.473532 1833 log.go:172] (0xc0005d6d20) (1) Data frame sent\nI0821 19:41:22.473547 1833 log.go:172] (0xc0009da580) (0xc0005d6d20) Stream removed, broadcasting: 1\nI0821 19:41:22.473588 1833 log.go:172] (0xc0009da580) Go away received\nI0821 19:41:22.473856 1833 log.go:172] (0xc0009da580) (0xc0005d6d20) Stream removed, broadcasting: 1\nI0821 19:41:22.473875 1833 log.go:172] (0xc0009da580) (0xc0005d63c0) Stream removed, broadcasting: 3\nI0821 19:41:22.473887 1833 log.go:172] (0xc0009da580) (0xc000570000) Stream removed, broadcasting: 5\n" Aug 21 19:41:22.480: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:41:22.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9092" for this suite. Aug 21 19:41:28.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:41:28.604: INFO: namespace emptydir-9092 deletion completed in 6.119738491s • [SLOW TEST:12.540 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:41:28.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
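The shared-volume test summarized above reduces to two containers in one pod mounting the same emptyDir; one writes a file, the other reads it back. A minimal sketch (container and volume names are illustrative, not the test's actual spec):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  containers:
  - name: writer
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"]
    volumeMounts:
    - { name: share, mountPath: /usr/share/volumeshare }
  - name: reader
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - { name: share, mountPath: /usr/share/volumeshare }
  volumes:
  - name: share
    emptyDir: {}
EOF
kubectl exec pod-sharedvolume -c reader -- cat /usr/share/volumeshare/shareddata.txt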
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Aug 21 19:41:38.750: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:38.756: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:40.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:40.760: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:42.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:42.760: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:44.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:44.761: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:46.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:46.761: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:48.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:48.774: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:50.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:50.760: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:52.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:52.761: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:54.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:54.846: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:56.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:56.791: INFO: Pod pod-with-prestop-exec-hook still exists Aug 21 19:41:58.756: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Aug 21 19:41:58.780: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:41:58.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2742" for this suite. 
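The polling loop above is the interesting part: deleting a pod with a preStop exec hook keeps it alive ("still exists") until the hook and the grace period finish. A minimal sketch of such a pod (the real test's hook calls back to the HTTPGet handler pod created above; here it is just a local command, an assumption made for brevity):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo prestop ran; sleep 5"]
EOF
# the kubelet runs the preStop command before terminating the container
kubectl delete pod pod-with-prestop-exec-hook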
Aug 21 19:42:20.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:42:20.917: INFO: namespace container-lifecycle-hook-2742 deletion completed in 22.126805909s • [SLOW TEST:52.313 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:42:20.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 19:42:21.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-44cc4c4e-e0ee-47e5-9da2-63d05567cd2e" in namespace "projected-464" to be "success or failure" Aug 21 19:42:21.058: INFO: Pod "downwardapi-volume-44cc4c4e-e0ee-47e5-9da2-63d05567cd2e": Phase="Pending", Reason="", readiness=false. Elapsed: 36.135748ms Aug 21 19:42:23.104: INFO: Pod "downwardapi-volume-44cc4c4e-e0ee-47e5-9da2-63d05567cd2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082065268s Aug 21 19:42:25.108: INFO: Pod "downwardapi-volume-44cc4c4e-e0ee-47e5-9da2-63d05567cd2e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.086104597s STEP: Saw pod success Aug 21 19:42:25.108: INFO: Pod "downwardapi-volume-44cc4c4e-e0ee-47e5-9da2-63d05567cd2e" satisfied condition "success or failure" Aug 21 19:42:25.111: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-44cc4c4e-e0ee-47e5-9da2-63d05567cd2e container client-container: STEP: delete the pod Aug 21 19:42:25.172: INFO: Waiting for pod downwardapi-volume-44cc4c4e-e0ee-47e5-9da2-63d05567cd2e to disappear Aug 21 19:42:25.188: INFO: Pod downwardapi-volume-44cc4c4e-e0ee-47e5-9da2-63d05567cd2e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:42:25.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-464" for this suite. Aug 21 19:42:31.203: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:42:31.300: INFO: namespace projected-464 deletion completed in 6.108465611s • [SLOW TEST:10.382 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:42:31.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-7e24f832-e152-4a3d-a81c-5e4d4baf8ca4 STEP: Creating a pod to test consume secrets Aug 21 19:42:31.388: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-602bd5b4-9b04-499e-bc55-f6063283f1fe" in namespace "projected-4088" to be "success or failure" Aug 21 19:42:31.398: INFO: Pod "pod-projected-secrets-602bd5b4-9b04-499e-bc55-f6063283f1fe": Phase="Pending", Reason="", readiness=false. Elapsed: 9.97325ms Aug 21 19:42:33.402: INFO: Pod "pod-projected-secrets-602bd5b4-9b04-499e-bc55-f6063283f1fe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014253924s Aug 21 19:42:35.406: INFO: Pod "pod-projected-secrets-602bd5b4-9b04-499e-bc55-f6063283f1fe": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018458094s STEP: Saw pod success Aug 21 19:42:35.406: INFO: Pod "pod-projected-secrets-602bd5b4-9b04-499e-bc55-f6063283f1fe" satisfied condition "success or failure" Aug 21 19:42:35.409: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-602bd5b4-9b04-499e-bc55-f6063283f1fe container projected-secret-volume-test: STEP: delete the pod Aug 21 19:42:35.429: INFO: Waiting for pod pod-projected-secrets-602bd5b4-9b04-499e-bc55-f6063283f1fe to disappear Aug 21 19:42:35.434: INFO: Pod pod-projected-secrets-602bd5b4-9b04-499e-bc55-f6063283f1fe no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:42:35.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4088" for this suite. Aug 21 19:42:41.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:42:41.547: INFO: namespace projected-4088 deletion completed in 6.109206519s • [SLOW TEST:10.246 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:42:41.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-689faa62-296c-4d87-a56d-f7104e7a58c7 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-689faa62-296c-4d87-a56d-f7104e7a58c7 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:42:49.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9859" for this suite. 
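What "waiting to observe update in volume" above means in practice: configMap volumes are refreshed by the kubelet on its periodic sync, so an edit appears in the mounted file after a delay (up to roughly a minute by default) without restarting the pod. A hand-runnable sketch with illustrative names:

kubectl create configmap configmap-test-upd --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmap-upd
spec:
  containers:
  - name: cm-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - { name: cm, mountPath: /etc/configmap-volume }
  volumes:
  - name: cm
    configMap: { name: configmap-test-upd }
EOF
kubectl patch configmap configmap-test-upd -p '{"data":{"data-1":"value-2"}}'
# eventually flips from value-1 to value-2 without a pod restart
kubectl exec pod-configmap-upd -- cat /etc/configmap-volume/data-1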
Aug 21 19:43:11.713: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:43:11.784: INFO: namespace configmap-9859 deletion completed in 22.087681672s • [SLOW TEST:30.237 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:43:11.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Aug 21 19:43:11.825: INFO: Waiting up to 5m0s for pod "client-containers-0e25db93-cfe4-47d5-a359-0fc3b0fe4a05" in namespace "containers-1670" to be "success or failure" Aug 21 19:43:11.841: INFO: Pod "client-containers-0e25db93-cfe4-47d5-a359-0fc3b0fe4a05": Phase="Pending", Reason="", readiness=false. Elapsed: 15.928317ms Aug 21 19:43:13.845: INFO: Pod "client-containers-0e25db93-cfe4-47d5-a359-0fc3b0fe4a05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02023395s Aug 21 19:43:15.849: INFO: Pod "client-containers-0e25db93-cfe4-47d5-a359-0fc3b0fe4a05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023765857s STEP: Saw pod success Aug 21 19:43:15.849: INFO: Pod "client-containers-0e25db93-cfe4-47d5-a359-0fc3b0fe4a05" satisfied condition "success or failure" Aug 21 19:43:15.851: INFO: Trying to get logs from node iruya-worker pod client-containers-0e25db93-cfe4-47d5-a359-0fc3b0fe4a05 container test-container: STEP: delete the pod Aug 21 19:43:15.898: INFO: Waiting for pod client-containers-0e25db93-cfe4-47d5-a359-0fc3b0fe4a05 to disappear Aug 21 19:43:15.907: INFO: Pod client-containers-0e25db93-cfe4-47d5-a359-0fc3b0fe4a05 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:43:15.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1670" for this suite. 
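The Docker-containers test above hinges on the command/args mapping: in a pod spec, command overrides the image's ENTRYPOINT and args overrides its CMD. A minimal sketch (names and arguments illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-override
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["echo"]                                 # replaces the image ENTRYPOINT
    args: ["container", "entrypoint", "overridden"]   # replaces the image CMD
EOF
kubectl logs client-containers-override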
Aug 21 19:43:21.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:43:22.018: INFO: namespace containers-1670 deletion completed in 6.106807593s • [SLOW TEST:10.234 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:43:22.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 21 19:43:22.061: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2997' Aug 21 19:43:22.174: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 21 19:43:22.174: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Aug 21 19:43:22.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2997' Aug 21 19:43:22.328: INFO: stderr: "" Aug 21 19:43:22.328: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:43:22.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2997" for this suite. 
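The deprecation warning captured above is the v1.15 behaviour of kubectl run: without an explicit generator it creates a Deployment and warns. Spelled out as standalone commands, with the image pinned to the one the test uses and the names arbitrary:

# v1.15 default: creates a Deployment and prints the deprecation warning.
kubectl run nginx-demo --image=docker.io/library/nginx:1.14-alpine
# The replacements the warning suggests:
kubectl run nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine
kubectl create deployment nginx-deploy --image=docker.io/library/nginx:1.14-alpine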
Aug 21 19:43:28.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:43:28.423: INFO: namespace kubectl-2997 deletion completed in 6.091984592s • [SLOW TEST:6.404 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:43:28.423: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Aug 21 19:43:28.503: INFO: Waiting up to 5m0s for pod "var-expansion-f1ec46bd-4778-4148-8832-66d9cd13c3af" in namespace "var-expansion-9329" to be "success or failure" Aug 21 19:43:28.507: INFO: Pod "var-expansion-f1ec46bd-4778-4148-8832-66d9cd13c3af": Phase="Pending", Reason="", readiness=false. Elapsed: 3.54606ms Aug 21 19:43:30.510: INFO: Pod "var-expansion-f1ec46bd-4778-4148-8832-66d9cd13c3af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00743202s Aug 21 19:43:32.515: INFO: Pod "var-expansion-f1ec46bd-4778-4148-8832-66d9cd13c3af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01189837s STEP: Saw pod success Aug 21 19:43:32.515: INFO: Pod "var-expansion-f1ec46bd-4778-4148-8832-66d9cd13c3af" satisfied condition "success or failure" Aug 21 19:43:32.518: INFO: Trying to get logs from node iruya-worker pod var-expansion-f1ec46bd-4778-4148-8832-66d9cd13c3af container dapi-container: STEP: delete the pod Aug 21 19:43:32.538: INFO: Waiting for pod var-expansion-f1ec46bd-4778-4148-8832-66d9cd13c3af to disappear Aug 21 19:43:32.548: INFO: Pod var-expansion-f1ec46bd-4778-4148-8832-66d9cd13c3af no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:43:32.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-9329" for this suite. 
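The Variable Expansion test exercises Kubernetes-side substitution: a $(VAR) reference in command or args is expanded from the container's env entries by the kubelet before the process starts, with no shell involved. A minimal sketch under hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "expanded by the kubelet"
    # No shell here: $(MESSAGE) is substituted by Kubernetes itself
    # before echo is exec'd.
    command: ["echo", "$(MESSAGE)"]
EOF
kubectl logs var-expansion-demo    # prints: expanded by the kubelet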
Aug 21 19:43:38.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:43:38.643: INFO: namespace var-expansion-9329 deletion completed in 6.087971333s • [SLOW TEST:10.220 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:43:38.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-05332d5c-3f74-4479-ab6d-39a4bf78c879 [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:43:38.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9023" for this suite. 
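The Secrets test above is a negative test: API server validation rejects a Secret whose data map contains an empty key, so no object is ever created. Reproducing the rejection by hand, with a hypothetical name:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: secret-emptykey-demo
data:
  "": dmFsdWU=    # base64 "value" under an empty key
EOF
# Expected result: the apply fails with a validation error pointing at
# the empty data key, and `kubectl get secret secret-emptykey-demo`
# finds nothing.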
Aug 21 19:43:44.762: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:43:44.841: INFO: namespace secrets-9023 deletion completed in 6.159949471s • [SLOW TEST:6.198 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:43:44.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 21 19:43:44.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-6158' Aug 21 19:43:45.004: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Aug 21 19:43:45.004: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Aug 21 19:43:45.045: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-t44hk] Aug 21 19:43:45.045: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-t44hk" in namespace "kubectl-6158" to be "running and ready" Aug 21 19:43:45.048: INFO: Pod "e2e-test-nginx-rc-t44hk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.489652ms Aug 21 19:43:47.051: INFO: Pod "e2e-test-nginx-rc-t44hk": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005867463s Aug 21 19:43:49.056: INFO: Pod "e2e-test-nginx-rc-t44hk": Phase="Running", Reason="", readiness=true. Elapsed: 4.010193461s Aug 21 19:43:49.056: INFO: Pod "e2e-test-nginx-rc-t44hk" satisfied condition "running and ready" Aug 21 19:43:49.056: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-t44hk] Aug 21 19:43:49.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-6158' Aug 21 19:43:49.187: INFO: stderr: "" Aug 21 19:43:49.187: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Aug 21 19:43:49.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-6158' Aug 21 19:43:49.285: INFO: stderr: "" Aug 21 19:43:49.285: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:43:49.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6158" for this suite. Aug 21 19:43:55.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:43:55.393: INFO: namespace kubectl-6158 deletion completed in 6.104405711s • [SLOW TEST:10.551 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:43:55.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Aug 21 19:43:56.129: INFO: Pod name wrapped-volume-race-9f44f4b4-4b53-465c-846b-ea67b257ebd7: Found 0 pods out of 5 Aug 21 19:44:01.137: INFO: Pod name wrapped-volume-race-9f44f4b4-4b53-465c-846b-ea67b257ebd7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9f44f4b4-4b53-465c-846b-ea67b257ebd7 in namespace emptydir-wrapper-1652, will wait for the garbage collector to delete the pods Aug 21 19:44:15.333: INFO: Deleting ReplicationController wrapped-volume-race-9f44f4b4-4b53-465c-846b-ea67b257ebd7 took: 7.22994ms Aug 21 19:44:15.633: INFO: Terminating 
ReplicationController wrapped-volume-race-9f44f4b4-4b53-465c-846b-ea67b257ebd7 pods took: 300.297488ms STEP: Creating RC which spawns configmap-volume pods Aug 21 19:44:53.355: INFO: Pod name wrapped-volume-race-a3f50242-b507-46f7-a835-202c9d3024b0: Found 0 pods out of 5 Aug 21 19:44:58.362: INFO: Pod name wrapped-volume-race-a3f50242-b507-46f7-a835-202c9d3024b0: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-a3f50242-b507-46f7-a835-202c9d3024b0 in namespace emptydir-wrapper-1652, will wait for the garbage collector to delete the pods Aug 21 19:45:14.472: INFO: Deleting ReplicationController wrapped-volume-race-a3f50242-b507-46f7-a835-202c9d3024b0 took: 6.75339ms Aug 21 19:45:14.772: INFO: Terminating ReplicationController wrapped-volume-race-a3f50242-b507-46f7-a835-202c9d3024b0 pods took: 300.394037ms STEP: Creating RC which spawns configmap-volume pods Aug 21 19:45:54.421: INFO: Pod name wrapped-volume-race-de8a4a07-f67f-4b70-bd86-32ebd63be3b9: Found 0 pods out of 5 Aug 21 19:45:59.429: INFO: Pod name wrapped-volume-race-de8a4a07-f67f-4b70-bd86-32ebd63be3b9: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-de8a4a07-f67f-4b70-bd86-32ebd63be3b9 in namespace emptydir-wrapper-1652, will wait for the garbage collector to delete the pods Aug 21 19:46:15.540: INFO: Deleting ReplicationController wrapped-volume-race-de8a4a07-f67f-4b70-bd86-32ebd63be3b9 took: 7.743622ms Aug 21 19:46:15.840: INFO: Terminating ReplicationController wrapped-volume-race-de8a4a07-f67f-4b70-bd86-32ebd63be3b9 pods took: 300.337265ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:46:54.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-1652" for this suite. 
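The EmptyDir wrapper test repeatedly creates and tears down a ReplicationController whose pods each mount a large number of ConfigMap volumes (50 ConfigMaps, three cycles of 5 pods above); historically this pattern raced inside the kubelet's wrapped-volume handling. A scaled-down sketch of the same shape, with hypothetical names and only three ConfigMaps:

for i in 0 1 2; do
  kubectl create configmap wrapped-cm-$i --from-literal=key=value
done
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-demo
spec:
  replicas: 5
  template:
    metadata:
      labels:
        app: wrapped-volume-demo
    spec:
      containers:
      - name: sleeper
        image: busybox
        command: ["sleep", "3600"]
        volumeMounts:
        - {name: cm-0, mountPath: /etc/cm-0}
        - {name: cm-1, mountPath: /etc/cm-1}
        - {name: cm-2, mountPath: /etc/cm-2}
      volumes:
      - {name: cm-0, configMap: {name: wrapped-cm-0}}
      - {name: cm-1, configMap: {name: wrapped-cm-1}}
      - {name: cm-2, configMap: {name: wrapped-cm-2}}
EOF
# The test then deletes the RC and lets the garbage collector remove
# the pods before repeating the cycle:
kubectl delete rc wrapped-volume-demo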
Aug 21 19:47:02.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:47:02.451: INFO: namespace emptydir-wrapper-1652 deletion completed in 8.108751605s • [SLOW TEST:187.057 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:47:02.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 21 19:47:06.526: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:47:06.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8967" for this suite. 
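The termination-message check above can be reproduced with a pod that writes its own /dev/termination-log. With FallbackToLogsOnError, the file wins whenever it is non-empty; logs are only consulted for a failed container whose file is empty. A sketch, names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: writer
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# Once the pod has succeeded, the message should read from the file:
kubectl get pod termination-msg-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'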
Aug 21 19:47:12.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:47:12.725: INFO: namespace container-runtime-8967 deletion completed in 6.137754682s • [SLOW TEST:10.274 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:47:12.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Aug 21 19:47:12.787: INFO: Waiting up to 5m0s for pod "pod-a203be0a-1f96-40dc-a7f7-dfd4ec1f571a" in namespace "emptydir-7680" to be "success or failure" Aug 21 19:47:12.808: INFO: Pod "pod-a203be0a-1f96-40dc-a7f7-dfd4ec1f571a": Phase="Pending", Reason="", readiness=false. Elapsed: 21.414783ms Aug 21 19:47:14.815: INFO: Pod "pod-a203be0a-1f96-40dc-a7f7-dfd4ec1f571a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027988518s Aug 21 19:47:16.817: INFO: Pod "pod-a203be0a-1f96-40dc-a7f7-dfd4ec1f571a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030774931s STEP: Saw pod success Aug 21 19:47:16.817: INFO: Pod "pod-a203be0a-1f96-40dc-a7f7-dfd4ec1f571a" satisfied condition "success or failure" Aug 21 19:47:16.819: INFO: Trying to get logs from node iruya-worker2 pod pod-a203be0a-1f96-40dc-a7f7-dfd4ec1f571a container test-container: STEP: delete the pod Aug 21 19:47:16.840: INFO: Waiting for pod pod-a203be0a-1f96-40dc-a7f7-dfd4ec1f571a to disappear Aug 21 19:47:16.852: INFO: Pod pod-a203be0a-1f96-40dc-a7f7-dfd4ec1f571a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:47:16.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7680" for this suite. 
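The (non-root,0777,tmpfs) variant above combines three ingredients: an emptyDir with medium: Memory (a tmpfs mount), a non-root securityContext, and a 0777 file-mode check. The volume half of that, sketched with arbitrary names and an arbitrary UID:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001          # arbitrary non-root UID
  containers:
  - name: test-container
    image: busybox
    # Show that the volume really is tmpfs and inspect its mode.
    command: ["sh", "-c", "mount | grep /mnt/test; ls -ld /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory    # backs the emptyDir with tmpfs
EOF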
Aug 21 19:47:22.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:47:22.988: INFO: namespace emptydir-7680 deletion completed in 6.13313725s • [SLOW TEST:10.263 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:47:22.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 19:47:23.086: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3f2bce5-2a67-4964-a12d-4bb53051d2bd" in namespace "projected-6477" to be "success or failure" Aug 21 19:47:23.095: INFO: Pod "downwardapi-volume-a3f2bce5-2a67-4964-a12d-4bb53051d2bd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.898475ms Aug 21 19:47:25.101: INFO: Pod "downwardapi-volume-a3f2bce5-2a67-4964-a12d-4bb53051d2bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01534366s Aug 21 19:47:27.119: INFO: Pod "downwardapi-volume-a3f2bce5-2a67-4964-a12d-4bb53051d2bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033178262s STEP: Saw pod success Aug 21 19:47:27.119: INFO: Pod "downwardapi-volume-a3f2bce5-2a67-4964-a12d-4bb53051d2bd" satisfied condition "success or failure" Aug 21 19:47:27.122: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a3f2bce5-2a67-4964-a12d-4bb53051d2bd container client-container: STEP: delete the pod Aug 21 19:47:27.138: INFO: Waiting for pod downwardapi-volume-a3f2bce5-2a67-4964-a12d-4bb53051d2bd to disappear Aug 21 19:47:27.161: INFO: Pod downwardapi-volume-a3f2bce5-2a67-4964-a12d-4bb53051d2bd no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:47:27.161: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6477" for this suite. 
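The projected downwardAPI test surfaces the container's own CPU request as a file, via a resourceFieldRef inside a projected volume. A minimal reproduction, assuming hypothetical names and a 250m request (a divisor of 1m makes the file read 250):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m
EOF
kubectl logs downwardapi-cpu-demo    # expected output: 250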
Aug 21 19:47:33.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:47:33.280: INFO: namespace projected-6477 deletion completed in 6.115695331s • [SLOW TEST:10.291 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:47:33.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Aug 21 19:47:37.386: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:47:37.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6161" for this suite. 
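This is the complement of the earlier termination-message case: the container succeeds, writes nothing to the termination-message file, and FallbackToLogsOnError never kicks in because the fallback only applies on error, so the reported message stays empty (the `Expected: &{}` line above). A short sketch under the same assumptions:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: empty-termination-msg-demo
spec:
  restartPolicy: Never
  containers:
  - name: quiet
    image: busybox
    command: ["true"]    # exit 0, no output, no termination file written
    terminationMessagePolicy: FallbackToLogsOnError
EOF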
Aug 21 19:47:43.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:47:43.527: INFO: namespace container-runtime-6161 deletion completed in 6.093854287s • [SLOW TEST:10.247 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:47:43.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Aug 21 19:47:43.609: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5989,SelfLink:/api/v1/namespaces/watch-5989/configmaps/e2e-watch-test-label-changed,UID:43bf4bf5-65ae-408f-8bfb-94f21376f475,ResourceVersion:1626979,Generation:0,CreationTimestamp:2020-08-21 19:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Aug 21 19:47:43.610: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5989,SelfLink:/api/v1/namespaces/watch-5989/configmaps/e2e-watch-test-label-changed,UID:43bf4bf5-65ae-408f-8bfb-94f21376f475,ResourceVersion:1626980,Generation:0,CreationTimestamp:2020-08-21 19:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Aug 21 19:47:43.610: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5989,SelfLink:/api/v1/namespaces/watch-5989/configmaps/e2e-watch-test-label-changed,UID:43bf4bf5-65ae-408f-8bfb-94f21376f475,ResourceVersion:1626981,Generation:0,CreationTimestamp:2020-08-21 19:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Aug 21 19:47:53.638: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5989,SelfLink:/api/v1/namespaces/watch-5989/configmaps/e2e-watch-test-label-changed,UID:43bf4bf5-65ae-408f-8bfb-94f21376f475,ResourceVersion:1627002,Generation:0,CreationTimestamp:2020-08-21 19:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Aug 21 19:47:53.638: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5989,SelfLink:/api/v1/namespaces/watch-5989/configmaps/e2e-watch-test-label-changed,UID:43bf4bf5-65ae-408f-8bfb-94f21376f475,ResourceVersion:1627003,Generation:0,CreationTimestamp:2020-08-21 19:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Aug 21 19:47:53.638: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-5989,SelfLink:/api/v1/namespaces/watch-5989/configmaps/e2e-watch-test-label-changed,UID:43bf4bf5-65ae-408f-8bfb-94f21376f475,ResourceVersion:1627004,Generation:0,CreationTimestamp:2020-08-21 19:47:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
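The three dumps above are the raw ADDED, MODIFIED and DELETED events of a label-selected ConfigMap watch: changing the label away from the selector surfaces as DELETED, and restoring it surfaces as ADDED carrying the accumulated mutations. A rough kubectl equivalent, names hypothetical (plain `kubectl get --watch` re-prints rows rather than naming the underlying event types):

kubectl create configmap watch-demo
kubectl label configmap watch-demo watch-this-configmap=label-changed-and-restored
# Terminal 1: watch only ConfigMaps matching the label selector.
kubectl get configmaps -l watch-this-configmap=label-changed-and-restored --watch
# Terminal 2: flip the label away and back; the server-side watch sees
# DELETED, then ADDED.
kubectl label configmap watch-demo watch-this-configmap=wrong-value --overwrite
kubectl label configmap watch-demo watch-this-configmap=label-changed-and-restored --overwrite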
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:47:53.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5989" for this suite. Aug 21 19:47:59.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:47:59.728: INFO: namespace watch-5989 deletion completed in 6.084837416s • [SLOW TEST:16.200 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:47:59.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:47:59.791: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Aug 21 19:48:04.796: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Aug 21 19:48:04.796: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Aug 21 19:48:04.833: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-3921,SelfLink:/apis/apps/v1/namespaces/deployment-3921/deployments/test-cleanup-deployment,UID:c0eca802-cf78-41b2-8eb0-19653f78c9d4,ResourceVersion:1627047,Generation:1,CreationTimestamp:2020-08-21 19:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Aug 21 19:48:04.848: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-3921,SelfLink:/apis/apps/v1/namespaces/deployment-3921/replicasets/test-cleanup-deployment-55bbcbc84c,UID:2b059e6b-3fbe-4987-a8eb-71a31a2f93ba,ResourceVersion:1627049,Generation:1,CreationTimestamp:2020-08-21 19:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c0eca802-cf78-41b2-8eb0-19653f78c9d4 0xc000c70157 0xc000c70158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Aug 21 19:48:04.848: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Aug 21 19:48:04.848: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-3921,SelfLink:/apis/apps/v1/namespaces/deployment-3921/replicasets/test-cleanup-controller,UID:74728534-4adc-4a36-a78b-1a0ecb1fff24,ResourceVersion:1627048,Generation:1,CreationTimestamp:2020-08-21 19:47:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment c0eca802-cf78-41b2-8eb0-19653f78c9d4 0xc000c70087 0xc000c70088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Aug 21 19:48:04.874: INFO: Pod "test-cleanup-controller-9bzgd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-9bzgd,GenerateName:test-cleanup-controller-,Namespace:deployment-3921,SelfLink:/api/v1/namespaces/deployment-3921/pods/test-cleanup-controller-9bzgd,UID:5d6ba081-30d8-461f-aa5b-002e3a909087,ResourceVersion:1627044,Generation:0,CreationTimestamp:2020-08-21 19:47:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 74728534-4adc-4a36-a78b-1a0ecb1fff24 0xc000c70a2f 0xc000c70a40}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pt2hs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pt2hs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-pt2hs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c70ab0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c70ad0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:47:59 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:48:03 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:48:03 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:47:59 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.9,PodIP:10.244.1.234,StartTime:2020-08-21 19:47:59 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-08-21 19:48:02 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2975ae55b37953b7d8b40e3e9ee55710a7cf011a7c4ea53c827f7f0af0cb0c42}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Aug 21 19:48:04.874: INFO: Pod "test-cleanup-deployment-55bbcbc84c-42xkj" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-42xkj,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-3921,SelfLink:/api/v1/namespaces/deployment-3921/pods/test-cleanup-deployment-55bbcbc84c-42xkj,UID:541940f1-e7e1-430d-8129-fc02e0cf06d5,ResourceVersion:1627055,Generation:0,CreationTimestamp:2020-08-21 19:48:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 2b059e6b-3fbe-4987-a8eb-71a31a2f93ba 0xc000c70bc7 0xc000c70bc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pt2hs {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pt2hs,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-pt2hs true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000c70c50} {node.kubernetes.io/unreachable Exists NoExecute 0xc000c70c70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 19:48:04 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:48:04.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3921" for this suite. 
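The Deployment dump above shows RevisionHistoryLimit:*0, which is what drives the "delete old replica sets" behaviour: with a history limit of zero, the superseded ReplicaSet (test-cleanup-controller) is garbage-collected as soon as the rollout replaces it. A minimal sketch with hypothetical names:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo
spec:
  replicas: 1
  revisionHistoryLimit: 0   # keep no superseded ReplicaSets around
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
# Trigger a new revision, then confirm the old ReplicaSet is gone:
kubectl set image deployment/cleanup-demo nginx=docker.io/library/nginx:1.15-alpine
kubectl get replicasets -l app=cleanup-demo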
Aug 21 19:48:10.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:48:11.055: INFO: namespace deployment-3921 deletion completed in 6.1606824s • [SLOW TEST:11.327 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:48:11.056: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-901 STEP: creating a selector STEP: Creating the service pods in kubernetes Aug 21 19:48:11.106: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Aug 21 19:48:33.269: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.236:8080/dial?request=hostName&protocol=udp&host=10.244.2.44&port=8081&tries=1'] Namespace:pod-network-test-901 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:48:33.269: INFO: >>> kubeConfig: /root/.kube/config I0821 19:48:33.305363 6 log.go:172] (0xc001a6a580) (0xc002d96280) Create stream I0821 19:48:33.305395 6 log.go:172] (0xc001a6a580) (0xc002d96280) Stream added, broadcasting: 1 I0821 19:48:33.307282 6 log.go:172] (0xc001a6a580) Reply frame received for 1 I0821 19:48:33.307342 6 log.go:172] (0xc001a6a580) (0xc002d96320) Create stream I0821 19:48:33.307358 6 log.go:172] (0xc001a6a580) (0xc002d96320) Stream added, broadcasting: 3 I0821 19:48:33.308260 6 log.go:172] (0xc001a6a580) Reply frame received for 3 I0821 19:48:33.308296 6 log.go:172] (0xc001a6a580) (0xc002d963c0) Create stream I0821 19:48:33.308308 6 log.go:172] (0xc001a6a580) (0xc002d963c0) Stream added, broadcasting: 5 I0821 19:48:33.309153 6 log.go:172] (0xc001a6a580) Reply frame received for 5 I0821 19:48:33.382994 6 log.go:172] (0xc001a6a580) Data frame received for 3 I0821 19:48:33.383024 6 log.go:172] (0xc002d96320) (3) Data frame handling I0821 19:48:33.383041 6 log.go:172] (0xc002d96320) (3) Data frame sent I0821 19:48:33.383790 6 log.go:172] (0xc001a6a580) Data frame received for 3 I0821 19:48:33.383816 6 log.go:172] (0xc002d96320) (3) Data frame handling I0821 19:48:33.383869 6 log.go:172] 
(0xc001a6a580) Data frame received for 5 I0821 19:48:33.383894 6 log.go:172] (0xc002d963c0) (5) Data frame handling I0821 19:48:33.385257 6 log.go:172] (0xc001a6a580) Data frame received for 1 I0821 19:48:33.385281 6 log.go:172] (0xc002d96280) (1) Data frame handling I0821 19:48:33.385299 6 log.go:172] (0xc002d96280) (1) Data frame sent I0821 19:48:33.385323 6 log.go:172] (0xc001a6a580) (0xc002d96280) Stream removed, broadcasting: 1 I0821 19:48:33.385342 6 log.go:172] (0xc001a6a580) Go away received I0821 19:48:33.385410 6 log.go:172] (0xc001a6a580) (0xc002d96280) Stream removed, broadcasting: 1 I0821 19:48:33.385426 6 log.go:172] (0xc001a6a580) (0xc002d96320) Stream removed, broadcasting: 3 I0821 19:48:33.385437 6 log.go:172] (0xc001a6a580) (0xc002d963c0) Stream removed, broadcasting: 5 Aug 21 19:48:33.385: INFO: Waiting for endpoints: map[] Aug 21 19:48:33.388: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.236:8080/dial?request=hostName&protocol=udp&host=10.244.1.235&port=8081&tries=1'] Namespace:pod-network-test-901 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:48:33.388: INFO: >>> kubeConfig: /root/.kube/config I0821 19:48:33.413517 6 log.go:172] (0xc000e511e0) (0xc00247b5e0) Create stream I0821 19:48:33.413557 6 log.go:172] (0xc000e511e0) (0xc00247b5e0) Stream added, broadcasting: 1 I0821 19:48:33.415878 6 log.go:172] (0xc000e511e0) Reply frame received for 1 I0821 19:48:33.415932 6 log.go:172] (0xc000e511e0) (0xc003b06b40) Create stream I0821 19:48:33.415992 6 log.go:172] (0xc000e511e0) (0xc003b06b40) Stream added, broadcasting: 3 I0821 19:48:33.417902 6 log.go:172] (0xc000e511e0) Reply frame received for 3 I0821 19:48:33.417948 6 log.go:172] (0xc000e511e0) (0xc002d96460) Create stream I0821 19:48:33.417974 6 log.go:172] (0xc000e511e0) (0xc002d96460) Stream added, broadcasting: 5 I0821 19:48:33.419137 6 log.go:172] (0xc000e511e0) Reply frame received for 5 I0821 19:48:33.483274 6 log.go:172] (0xc000e511e0) Data frame received for 3 I0821 19:48:33.483302 6 log.go:172] (0xc003b06b40) (3) Data frame handling I0821 19:48:33.483317 6 log.go:172] (0xc003b06b40) (3) Data frame sent I0821 19:48:33.484110 6 log.go:172] (0xc000e511e0) Data frame received for 3 I0821 19:48:33.484151 6 log.go:172] (0xc000e511e0) Data frame received for 5 I0821 19:48:33.484179 6 log.go:172] (0xc002d96460) (5) Data frame handling I0821 19:48:33.484200 6 log.go:172] (0xc003b06b40) (3) Data frame handling I0821 19:48:33.485997 6 log.go:172] (0xc000e511e0) Data frame received for 1 I0821 19:48:33.486014 6 log.go:172] (0xc00247b5e0) (1) Data frame handling I0821 19:48:33.486026 6 log.go:172] (0xc00247b5e0) (1) Data frame sent I0821 19:48:33.486035 6 log.go:172] (0xc000e511e0) (0xc00247b5e0) Stream removed, broadcasting: 1 I0821 19:48:33.486110 6 log.go:172] (0xc000e511e0) (0xc00247b5e0) Stream removed, broadcasting: 1 I0821 19:48:33.486120 6 log.go:172] (0xc000e511e0) (0xc003b06b40) Stream removed, broadcasting: 3 I0821 19:48:33.486158 6 log.go:172] (0xc000e511e0) Go away received I0821 19:48:33.486253 6 log.go:172] (0xc000e511e0) (0xc002d96460) Stream removed, broadcasting: 5 Aug 21 19:48:33.486: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:48:33.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "pod-network-test-901" for this suite. Aug 21 19:48:57.521: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:48:57.591: INFO: namespace pod-network-test-901 deletion completed in 24.091044838s • [SLOW TEST:46.535 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:48:57.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 21 19:49:05.807: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 19:49:05.818: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 19:49:07.818: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 19:49:07.822: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 19:49:09.818: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 19:49:09.822: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 19:49:11.818: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 19:49:11.829: INFO: Pod pod-with-poststart-http-hook still exists Aug 21 19:49:13.818: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Aug 21 19:49:13.822: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:49:13.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7597" for this suite. 
Aug 21 19:49:35.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:49:35.937: INFO: namespace container-lifecycle-hook-7597 deletion completed in 22.110817562s • [SLOW TEST:38.346 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:49:35.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:49:58.025: INFO: Container started at 2020-08-21 19:49:38 +0000 UTC, pod became ready at 2020-08-21 19:49:56 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:49:58.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-437" for this suite. 
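
The probe spec above confirms two things at once: the gap between container start (19:49:38) and Ready (19:49:56) respects the configured initial delay, and the restart count stays at zero, since readiness probes only flip the Ready condition and never restart a container. A sketch with assumed image, port, and timings (none of the values below are read from the log):

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver                 # illustrative name
spec:
  containers:
  - name: test-webserver
    image: gcr.io/kubernetes-e2e-test-images/test-webserver:1.0   # assumed image
    readinessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 20          # the pod must not report Ready before this delay elapses
      periodSeconds: 5
    # a failing readiness probe marks the pod NotReady but never restarts the container
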
Aug 21 19:50:18.046: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:50:18.131: INFO: namespace container-probe-437 deletion completed in 20.101297264s • [SLOW TEST:42.194 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:50:18.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Aug 21 19:50:22.220: INFO: Pod pod-hostip-440453cf-ee57-404c-8ceb-35739dc0b43f has hostIP: 172.18.0.5 [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:50:22.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6979" for this suite. 
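
The hostIP assertion above reads the scheduler's placement back out of pod status: once the pod is bound to a node, the kubelet publishes that node's address, and the spec only asserts the field is set. The relevant slice of the status object, with the value from the log:

status:
  hostIP: 172.18.0.5   # address of the node the pod was scheduled onto
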
Aug 21 19:50:44.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:50:44.387: INFO: namespace pods-6979 deletion completed in 22.163674953s • [SLOW TEST:26.256 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:50:44.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:50:50.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5027" for this suite. Aug 21 19:50:56.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:50:56.759: INFO: namespace namespaces-5027 deletion completed in 6.088645572s STEP: Destroying namespace "nsdeletetest-2363" for this suite. Aug 21 19:50:56.761: INFO: Namespace nsdeletetest-2363 was already deleted STEP: Destroying namespace "nsdeletetest-1284" for this suite. 
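
The namespace spec above leans on cascading deletion: a Service is a namespaced object, so destroying its namespace removes it, and the freshly recreated namespace must contain no services. A minimal Service of the kind the test provisions (name, selector, and port are illustrative; the namespace is one of the test namespaces from the log):

apiVersion: v1
kind: Service
metadata:
  name: test-service                 # illustrative name
  namespace: nsdeletetest-2363       # deleting this namespace deletes the Service with it
spec:
  selector:
    app: test
  ports:
  - port: 80
    targetPort: 80
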
Aug 21 19:51:02.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:51:02.936: INFO: namespace nsdeletetest-1284 deletion completed in 6.175423532s • [SLOW TEST:18.548 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:51:02.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 19:51:02.995: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0b53b099-d6cd-42d3-bde0-1c3925079341" in namespace "downward-api-4691" to be "success or failure" Aug 21 19:51:02.999: INFO: Pod "downwardapi-volume-0b53b099-d6cd-42d3-bde0-1c3925079341": Phase="Pending", Reason="", readiness=false. Elapsed: 3.890279ms Aug 21 19:51:05.003: INFO: Pod "downwardapi-volume-0b53b099-d6cd-42d3-bde0-1c3925079341": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00773996s Aug 21 19:51:07.075: INFO: Pod "downwardapi-volume-0b53b099-d6cd-42d3-bde0-1c3925079341": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079896008s STEP: Saw pod success Aug 21 19:51:07.075: INFO: Pod "downwardapi-volume-0b53b099-d6cd-42d3-bde0-1c3925079341" satisfied condition "success or failure" Aug 21 19:51:07.078: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0b53b099-d6cd-42d3-bde0-1c3925079341 container client-container: STEP: delete the pod Aug 21 19:51:07.097: INFO: Waiting for pod downwardapi-volume-0b53b099-d6cd-42d3-bde0-1c3925079341 to disappear Aug 21 19:51:07.101: INFO: Pod downwardapi-volume-0b53b099-d6cd-42d3-bde0-1c3925079341 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:51:07.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4691" for this suite. 
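
The downward API spec above relies on a documented fallback: when a container declares no CPU limit, a resourceFieldRef for limits.cpu resolves to the node's allocatable CPU. A pod sketch that projects that value into a file, reusing the client-container name from the log (image, paths, and volume name are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative name
spec:
  containers:
  - name: client-container           # container name as seen in the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    # no resources.limits.cpu is set, so the projected value falls back to node allocatable
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu       # with no limit declared, resolves to allocatable CPU
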
Aug 21 19:51:13.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:51:13.230: INFO: namespace downward-api-4691 deletion completed in 6.124610634s • [SLOW TEST:10.293 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:51:13.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2045ef6c-05f3-4fd6-aacd-64dd6ed00bea STEP: Creating a pod to test consume secrets Aug 21 19:51:13.374: INFO: Waiting up to 5m0s for pod "pod-secrets-b3519f7d-67a7-47d2-82da-cc0f2ad8ae10" in namespace "secrets-1452" to be "success or failure" Aug 21 19:51:13.390: INFO: Pod "pod-secrets-b3519f7d-67a7-47d2-82da-cc0f2ad8ae10": Phase="Pending", Reason="", readiness=false. Elapsed: 16.397143ms Aug 21 19:51:15.394: INFO: Pod "pod-secrets-b3519f7d-67a7-47d2-82da-cc0f2ad8ae10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020288507s Aug 21 19:51:17.398: INFO: Pod "pod-secrets-b3519f7d-67a7-47d2-82da-cc0f2ad8ae10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024627481s STEP: Saw pod success Aug 21 19:51:17.398: INFO: Pod "pod-secrets-b3519f7d-67a7-47d2-82da-cc0f2ad8ae10" satisfied condition "success or failure" Aug 21 19:51:17.402: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b3519f7d-67a7-47d2-82da-cc0f2ad8ae10 container secret-volume-test: STEP: delete the pod Aug 21 19:51:17.417: INFO: Waiting for pod pod-secrets-b3519f7d-67a7-47d2-82da-cc0f2ad8ae10 to disappear Aug 21 19:51:17.422: INFO: Pod pod-secrets-b3519f7d-67a7-47d2-82da-cc0f2ad8ae10 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:51:17.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1452" for this suite. 
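
Because Secrets are namespace-scoped, the pod above always mounts the copy in its own namespace even when an identically named Secret exists elsewhere; that is the isolation this spec checks by creating the extra secret-namespace-8428 namespace. A sketch of the pair, reusing the namespace and container name from the log (secret name, image, data, and mount path are assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: secret-test                  # the same name may exist in another namespace without conflict
  namespace: secrets-1452            # namespace from the log
data:
  data-1: dmFsdWUtMQ==               # base64 for "value-1"; illustrative payload
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # illustrative name
  namespace: secrets-1452
spec:
  containers:
  - name: secret-volume-test         # container name as seen in the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test        # resolved within the pod's own namespace only
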
Aug 21 19:51:23.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:51:23.513: INFO: namespace secrets-1452 deletion completed in 6.088426784s STEP: Destroying namespace "secret-namespace-8428" for this suite. Aug 21 19:51:29.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:51:29.607: INFO: namespace secret-namespace-8428 deletion completed in 6.093761739s • [SLOW TEST:16.377 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:51:29.607: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Aug 21 19:51:29.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-2827' Aug 21 19:51:32.832: INFO: stderr: "" Aug 21 19:51:32.832: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Aug 21 19:51:37.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-2827 -o json' Aug 21 19:51:37.988: INFO: stderr: "" Aug 21 19:51:37.988: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-08-21T19:51:32Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-2827\",\n \"resourceVersion\": \"1627760\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-2827/pods/e2e-test-nginx-pod\",\n \"uid\": \"f1725ee8-b2a8-4481-956c-214e485caeae\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": 
\"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-nphb9\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-nphb9\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-nphb9\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-21T19:51:32Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-21T19:51:35Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-21T19:51:35Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-08-21T19:51:32Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://2bbfdd3fb7d5a58bf176dce510b37548b6a3fe553d5b6f4fd1d90df09db0d52d\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-08-21T19:51:35Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.9\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.240\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-08-21T19:51:32Z\"\n }\n}\n" STEP: replace the image in the pod Aug 21 19:51:37.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-2827' Aug 21 19:51:38.261: INFO: stderr: "" Aug 21 19:51:38.261: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Aug 21 19:51:38.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-2827' Aug 21 19:51:41.831: INFO: stderr: "" Aug 21 19:51:41.831: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:51:41.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be 
ready STEP: Destroying namespace "kubectl-2827" for this suite. Aug 21 19:51:47.842: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:51:47.923: INFO: namespace kubectl-2827 deletion completed in 6.088050556s • [SLOW TEST:18.315 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:51:47.923: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-nqpg STEP: Creating a pod to test atomic-volume-subpath Aug 21 19:51:48.006: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-nqpg" in namespace "subpath-8713" to be "success or failure" Aug 21 19:51:48.013: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.85273ms Aug 21 19:51:50.016: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009977198s Aug 21 19:51:52.021: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 4.014397866s Aug 21 19:51:54.025: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 6.018401456s Aug 21 19:51:56.029: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 8.022467761s Aug 21 19:51:58.033: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 10.026760106s Aug 21 19:52:00.038: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 12.031571754s Aug 21 19:52:02.042: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 14.035727436s Aug 21 19:52:04.046: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.039757703s Aug 21 19:52:06.050: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 18.044178136s Aug 21 19:52:08.055: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 20.04854342s Aug 21 19:52:10.058: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Running", Reason="", readiness=true. Elapsed: 22.052250364s Aug 21 19:52:12.062: INFO: Pod "pod-subpath-test-configmap-nqpg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.056303644s STEP: Saw pod success Aug 21 19:52:12.063: INFO: Pod "pod-subpath-test-configmap-nqpg" satisfied condition "success or failure" Aug 21 19:52:12.066: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-nqpg container test-container-subpath-configmap-nqpg: STEP: delete the pod Aug 21 19:52:12.088: INFO: Waiting for pod pod-subpath-test-configmap-nqpg to disappear Aug 21 19:52:12.154: INFO: Pod pod-subpath-test-configmap-nqpg no longer exists STEP: Deleting pod pod-subpath-test-configmap-nqpg Aug 21 19:52:12.154: INFO: Deleting pod "pod-subpath-test-configmap-nqpg" in namespace "subpath-8713" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:52:12.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8713" for this suite. Aug 21 19:52:18.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:52:18.489: INFO: namespace subpath-8713 deletion completed in 6.289768131s • [SLOW TEST:30.566 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:52:18.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Aug 21 
19:52:18.605: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3126" to be "success or failure" Aug 21 19:52:18.621: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 15.902518ms Aug 21 19:52:20.645: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03978098s Aug 21 19:52:22.649: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043840621s Aug 21 19:52:24.653: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048213142s STEP: Saw pod success Aug 21 19:52:24.653: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Aug 21 19:52:24.657: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod Aug 21 19:52:24.683: INFO: Waiting for pod pod-host-path-test to disappear Aug 21 19:52:24.687: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:52:24.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-3126" for this suite. Aug 21 19:52:30.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:52:30.815: INFO: namespace hostpath-3126 deletion completed in 6.124588549s • [SLOW TEST:12.325 seconds] [sig-storage] HostPath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:52:30.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Aug 21 19:52:30.926: INFO: Pod name pod-release: Found 0 pods out of 1 Aug 21 19:52:35.930: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:52:36.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready 
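
In the ReplicationController spec above, "released" means that once a pod's labels stop matching the RC's selector its controllerRef to the RC is dropped, and the RC backfills a replacement to hold the replica count. A sketch using the pod-release name from the log (the image is assumed, borrowed from elsewhere in this run):

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release                  # name as seen in the log
spec:
  replicas: 1
  selector:
    name: pod-release
  template:
    metadata:
      labels:
        name: pod-release            # editing this label on a live pod releases it from the RC
    spec:
      containers:
      - name: pod-release
        image: docker.io/library/nginx:1.14-alpine   # assumed image
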
STEP: Destroying namespace "replication-controller-5759" for this suite. Aug 21 19:52:43.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:52:43.223: INFO: namespace replication-controller-5759 deletion completed in 6.243676448s • [SLOW TEST:12.406 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:52:43.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:52:43.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Aug 21 19:52:43.661: INFO: stderr: "" Aug 21 19:52:43.661: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-07-19T21:08:45Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:52:43.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2832" for this suite. 
Aug 21 19:52:49.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:52:49.748: INFO: namespace kubectl-2832 deletion completed in 6.083895862s • [SLOW TEST:6.525 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:52:49.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Aug 21 19:52:49.821: INFO: Waiting up to 5m0s for pod "pod-92bf3110-e6d7-4c56-b64c-c8a934139610" in namespace "emptydir-1480" to be "success or failure" Aug 21 19:52:49.825: INFO: Pod "pod-92bf3110-e6d7-4c56-b64c-c8a934139610": Phase="Pending", Reason="", readiness=false. Elapsed: 3.346348ms Aug 21 19:52:51.828: INFO: Pod "pod-92bf3110-e6d7-4c56-b64c-c8a934139610": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006134285s Aug 21 19:52:53.832: INFO: Pod "pod-92bf3110-e6d7-4c56-b64c-c8a934139610": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010635981s STEP: Saw pod success Aug 21 19:52:53.832: INFO: Pod "pod-92bf3110-e6d7-4c56-b64c-c8a934139610" satisfied condition "success or failure" Aug 21 19:52:53.835: INFO: Trying to get logs from node iruya-worker2 pod pod-92bf3110-e6d7-4c56-b64c-c8a934139610 container test-container: STEP: delete the pod Aug 21 19:52:53.877: INFO: Waiting for pod pod-92bf3110-e6d7-4c56-b64c-c8a934139610 to disappear Aug 21 19:52:53.902: INFO: Pod pod-92bf3110-e6d7-4c56-b64c-c8a934139610 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:52:53.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1480" for this suite. 
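
The emptyDir spec above is the default-medium sibling of the tmpfs cases elsewhere in this run: with emptyDir: {} the volume is backed by node disk rather than memory, and the test-container from the log prints the mount's type and mode, which are expected to match the volume's defaults. A minimal pod sketch (image and mount path are assumptions; only the container name comes from the log):

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example         # illustrative name
spec:
  containers:
  - name: test-container             # container name as seen in the log
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                     # default medium (node disk); {medium: Memory} gives the tmpfs variant
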
Aug 21 19:52:59.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:52:59.990: INFO: namespace emptydir-1480 deletion completed in 6.084492442s • [SLOW TEST:10.241 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:52:59.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 19:53:00.095: INFO: Create a RollingUpdate DaemonSet Aug 21 19:53:00.098: INFO: Check that daemon pods launch on every node of the cluster Aug 21 19:53:00.101: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:00.117: INFO: Number of nodes with available pods: 0 Aug 21 19:53:00.117: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:53:01.122: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:01.124: INFO: Number of nodes with available pods: 0 Aug 21 19:53:01.125: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:53:02.228: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:02.231: INFO: Number of nodes with available pods: 0 Aug 21 19:53:02.231: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:53:03.282: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:03.286: INFO: Number of nodes with available pods: 0 Aug 21 19:53:03.286: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:53:04.121: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:04.124: INFO: Number of 
nodes with available pods: 1 Aug 21 19:53:04.124: INFO: Node iruya-worker is running more than one daemon pod Aug 21 19:53:05.122: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:05.126: INFO: Number of nodes with available pods: 2 Aug 21 19:53:05.126: INFO: Number of running nodes: 2, number of available pods: 2 Aug 21 19:53:05.126: INFO: Update the DaemonSet to trigger a rollout Aug 21 19:53:05.132: INFO: Updating DaemonSet daemon-set Aug 21 19:53:14.154: INFO: Roll back the DaemonSet before rollout is complete Aug 21 19:53:14.161: INFO: Updating DaemonSet daemon-set Aug 21 19:53:14.161: INFO: Make sure DaemonSet rollback is complete Aug 21 19:53:14.186: INFO: Wrong image for pod: daemon-set-qpnn9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 21 19:53:14.186: INFO: Pod daemon-set-qpnn9 is not available Aug 21 19:53:14.202: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:15.206: INFO: Wrong image for pod: daemon-set-qpnn9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 21 19:53:15.206: INFO: Pod daemon-set-qpnn9 is not available Aug 21 19:53:15.210: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:16.206: INFO: Wrong image for pod: daemon-set-qpnn9. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Aug 21 19:53:16.206: INFO: Pod daemon-set-qpnn9 is not available Aug 21 19:53:16.210: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Aug 21 19:53:17.206: INFO: Pod daemon-set-6flh6 is not available Aug 21 19:53:17.210: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1764, will wait for the garbage collector to delete the pods Aug 21 19:53:17.274: INFO: Deleting DaemonSet.extensions daemon-set took: 5.190078ms Aug 21 19:53:17.574: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.311404ms Aug 21 19:53:20.190: INFO: Number of nodes with available pods: 0 Aug 21 19:53:20.190: INFO: Number of running nodes: 0, number of available pods: 0 Aug 21 19:53:20.193: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1764/daemonsets","resourceVersion":"1628188"},"items":null} Aug 21 19:53:20.194: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1764/pods","resourceVersion":"1628188"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:53:20.202: INFO: Waiting up to 3m0s for all (but 
0) nodes to be ready STEP: Destroying namespace "daemonsets-1764" for this suite. Aug 21 19:53:26.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:53:26.291: INFO: namespace daemonsets-1764 deletion completed in 6.086589871s • [SLOW TEST:26.301 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:53:26.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2403.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2403.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2403.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2403.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 19:53:32.396: INFO: DNS probes using dns-test-221415c6-53e7-4bf9-a32d-7b7f12980a9e succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2403.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2403.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2403.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2403.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 19:53:40.486: INFO: DNS probes using dns-test-b06fbda3-cf32-44c1-9e87-ec488e5ee880 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2403.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2403.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2403.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2403.svc.cluster.local; 
sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Aug 21 19:53:46.638: INFO: DNS probes using dns-test-53fc44eb-06ac-46c4-9691-e090d890db6c succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:53:46.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2403" for this suite. Aug 21 19:53:52.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:53:52.872: INFO: namespace dns-2403 deletion completed in 6.10303441s • [SLOW TEST:26.581 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:53:52.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Aug 21 19:53:52.965: INFO: Waiting up to 5m0s for pod "pod-38429108-e3c2-49bb-b878-a95f2ec1678f" in namespace "emptydir-5775" to be "success or failure" Aug 21 19:53:52.968: INFO: Pod "pod-38429108-e3c2-49bb-b878-a95f2ec1678f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.790739ms Aug 21 19:53:54.972: INFO: Pod "pod-38429108-e3c2-49bb-b878-a95f2ec1678f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007129573s Aug 21 19:53:56.976: INFO: Pod "pod-38429108-e3c2-49bb-b878-a95f2ec1678f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011286348s STEP: Saw pod success Aug 21 19:53:56.976: INFO: Pod "pod-38429108-e3c2-49bb-b878-a95f2ec1678f" satisfied condition "success or failure" Aug 21 19:53:56.979: INFO: Trying to get logs from node iruya-worker2 pod pod-38429108-e3c2-49bb-b878-a95f2ec1678f container test-container: STEP: delete the pod Aug 21 19:53:57.016: INFO: Waiting for pod pod-38429108-e3c2-49bb-b878-a95f2ec1678f to disappear Aug 21 19:53:57.028: INFO: Pod pod-38429108-e3c2-49bb-b878-a95f2ec1678f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:53:57.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5775" for this suite. Aug 21 19:54:03.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:54:03.121: INFO: namespace emptydir-5775 deletion completed in 6.089065931s • [SLOW TEST:10.249 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:54:03.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 19:54:03.212: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b1905914-9e1f-4ee2-911c-b6092550bb20" in namespace "projected-5582" to be "success or failure" Aug 21 19:54:03.235: INFO: Pod "downwardapi-volume-b1905914-9e1f-4ee2-911c-b6092550bb20": Phase="Pending", Reason="", readiness=false. Elapsed: 23.269486ms Aug 21 19:54:05.239: INFO: Pod "downwardapi-volume-b1905914-9e1f-4ee2-911c-b6092550bb20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027576463s Aug 21 19:54:07.243: INFO: Pod "downwardapi-volume-b1905914-9e1f-4ee2-911c-b6092550bb20": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.031323514s STEP: Saw pod success Aug 21 19:54:07.243: INFO: Pod "downwardapi-volume-b1905914-9e1f-4ee2-911c-b6092550bb20" satisfied condition "success or failure" Aug 21 19:54:07.245: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b1905914-9e1f-4ee2-911c-b6092550bb20 container client-container: STEP: delete the pod Aug 21 19:54:07.325: INFO: Waiting for pod downwardapi-volume-b1905914-9e1f-4ee2-911c-b6092550bb20 to disappear Aug 21 19:54:07.437: INFO: Pod downwardapi-volume-b1905914-9e1f-4ee2-911c-b6092550bb20 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:54:07.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5582" for this suite. Aug 21 19:54:13.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:54:13.568: INFO: namespace projected-5582 deletion completed in 6.127759688s • [SLOW TEST:10.447 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:54:13.569: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-e07a0725-5004-44c9-bd4b-a52ec43812aa STEP: Creating a pod to test consume secrets Aug 21 19:54:13.630: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-84ea2952-fd42-48ef-a61f-a92fc4988053" in namespace "projected-7593" to be "success or failure" Aug 21 19:54:13.641: INFO: Pod "pod-projected-secrets-84ea2952-fd42-48ef-a61f-a92fc4988053": Phase="Pending", Reason="", readiness=false. Elapsed: 11.052538ms Aug 21 19:54:15.646: INFO: Pod "pod-projected-secrets-84ea2952-fd42-48ef-a61f-a92fc4988053": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016793507s Aug 21 19:54:17.650: INFO: Pod "pod-projected-secrets-84ea2952-fd42-48ef-a61f-a92fc4988053": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020777252s STEP: Saw pod success Aug 21 19:54:17.651: INFO: Pod "pod-projected-secrets-84ea2952-fd42-48ef-a61f-a92fc4988053" satisfied condition "success or failure" Aug 21 19:54:17.653: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-84ea2952-fd42-48ef-a61f-a92fc4988053 container secret-volume-test: STEP: delete the pod Aug 21 19:54:17.698: INFO: Waiting for pod pod-projected-secrets-84ea2952-fd42-48ef-a61f-a92fc4988053 to disappear Aug 21 19:54:17.719: INFO: Pod pod-projected-secrets-84ea2952-fd42-48ef-a61f-a92fc4988053 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:54:17.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7593" for this suite. Aug 21 19:54:23.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:54:23.850: INFO: namespace projected-7593 deletion completed in 6.127922468s • [SLOW TEST:10.281 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:54:23.850: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-5ceff98d-1b03-4d2d-b88e-ec304ae8c87c STEP: Creating a pod to test consume configMaps Aug 21 19:54:23.946: INFO: Waiting up to 5m0s for pod "pod-configmaps-a60634a9-9b2c-427b-ada8-9fedb6f86569" in namespace "configmap-7744" to be "success or failure" Aug 21 19:54:23.949: INFO: Pod "pod-configmaps-a60634a9-9b2c-427b-ada8-9fedb6f86569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.906991ms Aug 21 19:54:25.953: INFO: Pod "pod-configmaps-a60634a9-9b2c-427b-ada8-9fedb6f86569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006772848s Aug 21 19:54:27.957: INFO: Pod "pod-configmaps-a60634a9-9b2c-427b-ada8-9fedb6f86569": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01097999s STEP: Saw pod success Aug 21 19:54:27.957: INFO: Pod "pod-configmaps-a60634a9-9b2c-427b-ada8-9fedb6f86569" satisfied condition "success or failure" Aug 21 19:54:27.961: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a60634a9-9b2c-427b-ada8-9fedb6f86569 container configmap-volume-test: STEP: delete the pod Aug 21 19:54:28.446: INFO: Waiting for pod pod-configmaps-a60634a9-9b2c-427b-ada8-9fedb6f86569 to disappear Aug 21 19:54:28.484: INFO: Pod pod-configmaps-a60634a9-9b2c-427b-ada8-9fedb6f86569 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:54:28.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7744" for this suite. Aug 21 19:54:34.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:54:34.668: INFO: namespace configmap-7744 deletion completed in 6.159631615s • [SLOW TEST:10.818 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:54:34.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:54:41.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1562" for this suite. 
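The adoption sequence above ("Given a Pod with a 'name' label ... Then the orphan pod is adopted") can be reproduced by hand. The sketch below is a minimal approximation rather than the suite's own code; the names orphan and adopter are hypothetical, and it assumes kubectl points at a working cluster:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: orphan
  labels:
    name: pod-adoption        # the label the controller will select on
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF

kubectl apply -f - <<EOF
apiVersion: v1
kind: ReplicationController
metadata:
  name: adopter
spec:
  replicas: 1
  selector:
    name: pod-adoption        # matches the pre-existing pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
EOF

# The formerly orphaned pod should now carry an ownerReference naming the controller.
kubectl get pod orphan -o jsonpath='{.metadata.ownerReferences[0].name}'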
Aug 21 19:55:03.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:55:03.918: INFO: namespace replication-controller-1562 deletion completed in 22.096348406s • [SLOW TEST:29.249 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:55:03.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:55:09.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9639" for this suite. 
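The ordering guarantee this spec checks comes straight from the watch API: a watch opened at a given resourceVersion replays all subsequent events for that collection, and two watches opened at the same version must deliver them in the same order. A rough manual probe of the same behaviour, assuming kubectl access and treating 12345 as a placeholder resourceVersion:

# The list response carries the collection's current resourceVersion in its metadata.
kubectl get --raw "/api/v1/namespaces/default/configmaps" | grep -o '"resourceVersion":"[0-9]*"' | head -n 1

# Open a watch starting from that version; run the same command in a second
# terminal and compare the order in which the two streams print events.
kubectl get --raw "/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=12345"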
Aug 21 19:55:15.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:55:15.656: INFO: namespace watch-9639 deletion completed in 6.18160018s • [SLOW TEST:11.738 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:55:15.656: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-f53f24c4-9c1f-4153-a389-5f0078290d87 [AfterEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:55:15.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1776" for this suite. 
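The failure above is API-side validation, not client behaviour: ConfigMap keys must be non-empty and match a restricted character set, so a manifest with an empty data key never reaches storage. A quick way to provoke the same rejection, assuming only a reachable cluster:

# The empty string "" is not a valid config key; the API server should
# refuse this create with a validation error.
kubectl apply -f - <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: bad-configmap
data:
  "": "oops"
EOF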
Aug 21 19:55:21.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:55:21.860: INFO: namespace configmap-1776 deletion completed in 6.122028432s • [SLOW TEST:6.204 seconds] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:55:21.861: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Aug 21 19:55:21.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-398' Aug 21 19:55:22.177: INFO: stderr: "" Aug 21 19:55:22.177: INFO: stdout: "pod/pause created\n" Aug 21 19:55:22.177: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Aug 21 19:55:22.177: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-398" to be "running and ready" Aug 21 19:55:22.198: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 20.777872ms Aug 21 19:55:24.204: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027000085s Aug 21 19:55:26.208: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.030821405s Aug 21 19:55:26.208: INFO: Pod "pause" satisfied condition "running and ready" Aug 21 19:55:26.208: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Aug 21 19:55:26.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-398' Aug 21 19:55:26.313: INFO: stderr: "" Aug 21 19:55:26.313: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Aug 21 19:55:26.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-398' Aug 21 19:55:26.392: INFO: stderr: "" Aug 21 19:55:26.392: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Aug 21 19:55:26.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-398' Aug 21 19:55:26.482: INFO: stderr: "" Aug 21 19:55:26.482: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Aug 21 19:55:26.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-398' Aug 21 19:55:26.571: INFO: stderr: "" Aug 21 19:55:26.571: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Aug 21 19:55:26.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-398' Aug 21 19:55:26.709: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 19:55:26.709: INFO: stdout: "pod \"pause\" force deleted\n" Aug 21 19:55:26.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-398' Aug 21 19:55:26.801: INFO: stderr: "No resources found.\n" Aug 21 19:55:26.801: INFO: stdout: "" Aug 21 19:55:26.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-398 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 21 19:55:26.888: INFO: stderr: "" Aug 21 19:55:26.888: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:55:26.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-398" for this suite. 
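The label round trip the spec just ran condenses to three kubectl invocations, shown here against the same hypothetical pod name pause in the current namespace:

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # -L prints the label as an extra column
kubectl label pods pause testing-label-                      # a trailing '-' removes the label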
Aug 21 19:55:32.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:55:33.031: INFO: namespace kubectl-398 deletion completed in 6.139647731s • [SLOW TEST:11.170 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:55:33.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Aug 21 19:55:33.093: INFO: Waiting up to 5m0s for pod "pod-e9518dd7-ce6c-4312-8e50-801677cc9644" in namespace "emptydir-631" to be "success or failure" Aug 21 19:55:33.111: INFO: Pod "pod-e9518dd7-ce6c-4312-8e50-801677cc9644": Phase="Pending", Reason="", readiness=false. Elapsed: 18.087749ms Aug 21 19:55:35.115: INFO: Pod "pod-e9518dd7-ce6c-4312-8e50-801677cc9644": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021674099s Aug 21 19:55:37.118: INFO: Pod "pod-e9518dd7-ce6c-4312-8e50-801677cc9644": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025215805s STEP: Saw pod success Aug 21 19:55:37.118: INFO: Pod "pod-e9518dd7-ce6c-4312-8e50-801677cc9644" satisfied condition "success or failure" Aug 21 19:55:37.121: INFO: Trying to get logs from node iruya-worker2 pod pod-e9518dd7-ce6c-4312-8e50-801677cc9644 container test-container: STEP: delete the pod Aug 21 19:55:37.152: INFO: Waiting for pod pod-e9518dd7-ce6c-4312-8e50-801677cc9644 to disappear Aug 21 19:55:37.163: INFO: Pod pod-e9518dd7-ce6c-4312-8e50-801677cc9644 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:55:37.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-631" for this suite. 
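The emptyDir specs in this run vary three knobs: the writing user (root vs. non-root), the file mode, and the volume medium (the node's default storage vs. tmpfs). A minimal stand-in for the (root,0644,default) case just shown, with names of my choosing rather than the suite's generated ones:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    # Write a file into the volume, force mode 0644, and print the mode back.
    command: ["sh", "-c", "echo content > /mnt/volume/file && chmod 0644 /mnt/volume/file && stat -c '%a' /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}               # omitting 'medium' selects the node's default storage
EOF

kubectl logs emptydir-demo     # expect: 644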
Aug 21 19:55:43.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:55:43.274: INFO: namespace emptydir-631 deletion completed in 6.108448074s • [SLOW TEST:10.243 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:55:43.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-pqpw7 in namespace proxy-14 I0821 19:55:43.415646 6 runners.go:180] Created replication controller with name: proxy-service-pqpw7, namespace: proxy-14, replica count: 1 I0821 19:55:44.466088 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 19:55:45.466336 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0821 19:55:46.466552 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0821 19:55:47.466812 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0821 19:55:48.467057 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0821 19:55:49.467266 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0821 19:55:50.467505 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0821 19:55:51.467701 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0821 19:55:52.467903 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0821 19:55:53.468132 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 0 running, 0 
pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0821 19:55:54.468382 6 runners.go:180] proxy-service-pqpw7 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Aug 21 19:55:54.471: INFO: setup took 11.139648579s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Aug 21 19:55:54.481: INFO: (0) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 9.040309ms) Aug 21 19:55:54.481: INFO: (0) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 9.054388ms) Aug 21 19:55:54.481: INFO: (0) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 9.231083ms) Aug 21 19:55:54.481: INFO: (0) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 9.476582ms) Aug 21 19:55:54.481: INFO: (0) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname1/proxy/: foo (200; 9.459981ms) Aug 21 19:55:54.481: INFO: (0) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 9.47539ms) Aug 21 19:55:54.481: INFO: (0) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testte... (200; 9.495717ms) Aug 21 19:55:54.481: INFO: (0) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 9.587622ms) Aug 21 19:55:54.488: INFO: (0) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 16.607117ms) Aug 21 19:55:54.488: INFO: (0) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 16.59711ms) Aug 21 19:55:54.488: INFO: (0) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 16.434729ms) Aug 21 19:55:54.488: INFO: (0) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 16.52941ms) Aug 21 19:55:54.488: INFO: (0) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: testte... (200; 7.347921ms) Aug 21 19:55:54.496: INFO: (1) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: test (200; 7.334351ms) Aug 21 19:55:54.498: INFO: (1) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 9.874399ms) Aug 21 19:55:54.499: INFO: (1) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname1/proxy/: foo (200; 9.985901ms) Aug 21 19:55:54.499: INFO: (1) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 10.253844ms) Aug 21 19:55:54.501: INFO: (1) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 12.703918ms) Aug 21 19:55:54.501: INFO: (1) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 12.716606ms) Aug 21 19:55:54.502: INFO: (1) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 12.908345ms) Aug 21 19:55:54.505: INFO: (2) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 3.298708ms) Aug 21 19:55:54.505: INFO: (2) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testte... 
(200; 3.506014ms) Aug 21 19:55:54.505: INFO: (2) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 3.617552ms) Aug 21 19:55:54.505: INFO: (2) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 3.626502ms) Aug 21 19:55:54.506: INFO: (2) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 4.637667ms) Aug 21 19:55:54.506: INFO: (2) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 4.598934ms) Aug 21 19:55:54.506: INFO: (2) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname1/proxy/: foo (200; 4.760115ms) Aug 21 19:55:54.506: INFO: (2) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 4.705398ms) Aug 21 19:55:54.506: INFO: (2) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 4.747231ms) Aug 21 19:55:54.507: INFO: (2) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 4.977959ms) Aug 21 19:55:54.507: INFO: (2) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 4.988726ms) Aug 21 19:55:54.511: INFO: (3) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.747747ms) Aug 21 19:55:54.511: INFO: (3) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 3.77472ms) Aug 21 19:55:54.511: INFO: (3) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.818037ms) Aug 21 19:55:54.511: INFO: (3) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 3.821496ms) Aug 21 19:55:54.511: INFO: (3) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: testtest (200; 4.090867ms) Aug 21 19:55:54.511: INFO: (3) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:1080/proxy/: te... (200; 4.146323ms) Aug 21 19:55:54.512: INFO: (3) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 5.171221ms) Aug 21 19:55:54.512: INFO: (3) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 5.345038ms) Aug 21 19:55:54.513: INFO: (3) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 5.7963ms) Aug 21 19:55:54.513: INFO: (3) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname1/proxy/: foo (200; 5.920682ms) Aug 21 19:55:54.513: INFO: (3) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 5.980608ms) Aug 21 19:55:54.513: INFO: (3) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 5.947568ms) Aug 21 19:55:54.516: INFO: (4) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: te... 
(200; 4.180902ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 4.657815ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 4.999368ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 5.040734ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 5.033861ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 5.140933ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 5.126489ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 5.222916ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 5.351401ms) Aug 21 19:55:54.518: INFO: (4) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testtestte... (200; 3.131196ms) Aug 21 19:55:54.522: INFO: (5) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 3.329187ms) Aug 21 19:55:54.522: INFO: (5) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 3.273334ms) Aug 21 19:55:54.522: INFO: (5) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 3.370759ms) Aug 21 19:55:54.522: INFO: (5) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.400673ms) Aug 21 19:55:54.522: INFO: (5) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.314423ms) Aug 21 19:55:54.522: INFO: (5) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 3.40236ms) Aug 21 19:55:54.522: INFO: (5) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: te... (200; 2.264552ms) Aug 21 19:55:54.526: INFO: (6) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 2.240433ms) Aug 21 19:55:54.526: INFO: (6) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 2.399009ms) Aug 21 19:55:54.526: INFO: (6) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testtest (200; 4.304087ms) Aug 21 19:55:54.528: INFO: (6) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 4.363792ms) Aug 21 19:55:54.528: INFO: (6) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 4.370826ms) Aug 21 19:55:54.528: INFO: (6) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 4.438702ms) Aug 21 19:55:54.528: INFO: (6) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 4.54198ms) Aug 21 19:55:54.528: INFO: (6) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 4.49302ms) Aug 21 19:55:54.531: INFO: (7) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 2.762436ms) Aug 21 19:55:54.531: INFO: (7) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 2.922833ms) Aug 21 19:55:54.531: INFO: (7) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: te... 
(200; 3.779567ms) Aug 21 19:55:54.532: INFO: (7) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.72695ms) Aug 21 19:55:54.532: INFO: (7) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.92796ms) Aug 21 19:55:54.532: INFO: (7) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testtest (200; 4.08558ms) Aug 21 19:55:54.538: INFO: (8) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 4.156284ms) Aug 21 19:55:54.538: INFO: (8) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname1/proxy/: foo (200; 4.128201ms) Aug 21 19:55:54.538: INFO: (8) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testte... (200; 4.297957ms) Aug 21 19:55:54.538: INFO: (8) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 4.235875ms) Aug 21 19:55:54.538: INFO: (8) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 4.268203ms) Aug 21 19:55:54.538: INFO: (8) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 4.330645ms) Aug 21 19:55:54.538: INFO: (8) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 4.245703ms) Aug 21 19:55:54.541: INFO: (9) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testte... (200; 1.972398ms) Aug 21 19:55:54.542: INFO: (9) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname1/proxy/: foo (200; 3.518155ms) Aug 21 19:55:54.542: INFO: (9) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 3.998274ms) Aug 21 19:55:54.542: INFO: (9) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 3.886833ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 3.859796ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.235551ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 3.706609ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.191532ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.372262ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 3.524374ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 3.688407ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 4.169213ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 3.76714ms) Aug 21 19:55:54.543: INFO: (9) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: test (200; 3.867188ms) Aug 21 19:55:54.547: INFO: (10) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testte... 
(200; 4.176553ms) Aug 21 19:55:54.547: INFO: (10) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 4.341691ms) Aug 21 19:55:54.547: INFO: (10) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 4.349881ms) Aug 21 19:55:54.547: INFO: (10) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 4.298283ms) Aug 21 19:55:54.547: INFO: (10) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 4.380418ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 2.947294ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 2.967158ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 2.975796ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testte... (200; 3.00225ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 3.00326ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.235432ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.285883ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.380956ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 3.456722ms) Aug 21 19:55:54.551: INFO: (11) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 3.714815ms) Aug 21 19:55:54.552: INFO: (11) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 3.944689ms) Aug 21 19:55:54.552: INFO: (11) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 4.08938ms) Aug 21 19:55:54.552: INFO: (11) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname1/proxy/: foo (200; 4.105199ms) Aug 21 19:55:54.552: INFO: (11) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 4.377975ms) Aug 21 19:55:54.554: INFO: (12) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 1.88308ms) Aug 21 19:55:54.555: INFO: (12) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 2.8867ms) Aug 21 19:55:54.555: INFO: (12) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 2.887798ms) Aug 21 19:55:54.555: INFO: (12) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 3.139808ms) Aug 21 19:55:54.555: INFO: (12) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:1080/proxy/: te... 
(200; 3.182119ms) Aug 21 19:55:54.555: INFO: (12) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.483902ms) Aug 21 19:55:54.556: INFO: (12) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: test (200; 3.884561ms) Aug 21 19:55:54.556: INFO: (12) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 3.859221ms) Aug 21 19:55:54.556: INFO: (12) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testtestte... (200; 3.19389ms) Aug 21 19:55:54.559: INFO: (13) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.316629ms) Aug 21 19:55:54.559: INFO: (13) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 3.342123ms) Aug 21 19:55:54.560: INFO: (13) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.409087ms) Aug 21 19:55:54.560: INFO: (13) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: te... (200; 2.938009ms) Aug 21 19:55:54.564: INFO: (14) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 2.923764ms) Aug 21 19:55:54.564: INFO: (14) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 2.945987ms) Aug 21 19:55:54.564: INFO: (14) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 2.93608ms) Aug 21 19:55:54.564: INFO: (14) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testtest (200; 2.475941ms) Aug 21 19:55:54.569: INFO: (15) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 4.323457ms) Aug 21 19:55:54.569: INFO: (15) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testte... (200; 4.648252ms) Aug 21 19:55:54.570: INFO: (15) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 4.672414ms) Aug 21 19:55:54.570: INFO: (15) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: testte... (200; 2.786094ms) Aug 21 19:55:54.573: INFO: (16) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 2.965333ms) Aug 21 19:55:54.573: INFO: (16) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:460/proxy/: tls baz (200; 3.025795ms) Aug 21 19:55:54.573: INFO: (16) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.223188ms) Aug 21 19:55:54.573: INFO: (16) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 3.289857ms) Aug 21 19:55:54.574: INFO: (16) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 3.381669ms) Aug 21 19:55:54.574: INFO: (16) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.439525ms) Aug 21 19:55:54.574: INFO: (16) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.591349ms) Aug 21 19:55:54.574: INFO: (16) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 3.569397ms) Aug 21 19:55:54.574: INFO: (16) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: te... 
(200; 3.398683ms) Aug 21 19:55:54.578: INFO: (17) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.20947ms) Aug 21 19:55:54.578: INFO: (17) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts/proxy/: test (200; 3.417617ms) Aug 21 19:55:54.578: INFO: (17) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 3.710235ms) Aug 21 19:55:54.578: INFO: (17) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 3.624109ms) Aug 21 19:55:54.578: INFO: (17) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 3.853699ms) Aug 21 19:55:54.578: INFO: (17) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 3.914184ms) Aug 21 19:55:54.578: INFO: (17) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 4.043431ms) Aug 21 19:55:54.578: INFO: (17) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testtestte... (200; 2.62498ms) Aug 21 19:55:54.582: INFO: (18) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 2.992466ms) Aug 21 19:55:54.582: INFO: (18) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: test (200; 3.155741ms) Aug 21 19:55:54.583: INFO: (18) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 3.714797ms) Aug 21 19:55:54.583: INFO: (18) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname1/proxy/: foo (200; 3.766253ms) Aug 21 19:55:54.583: INFO: (18) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname2/proxy/: bar (200; 3.735299ms) Aug 21 19:55:54.583: INFO: (18) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 3.776799ms) Aug 21 19:55:54.583: INFO: (18) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 3.723636ms) Aug 21 19:55:54.585: INFO: (19) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:1080/proxy/: te... 
(200; 2.491691ms) Aug 21 19:55:54.585: INFO: (19) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 2.485886ms) Aug 21 19:55:54.585: INFO: (19) /api/v1/namespaces/proxy-14/services/proxy-service-pqpw7:portname1/proxy/: foo (200; 2.732198ms) Aug 21 19:55:54.587: INFO: (19) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:160/proxy/: foo (200; 4.318662ms) Aug 21 19:55:54.587: INFO: (19) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname1/proxy/: tls baz (200; 4.665884ms) Aug 21 19:55:54.588: INFO: (19) /api/v1/namespaces/proxy-14/pods/http:proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 4.884768ms) Aug 21 19:55:54.588: INFO: (19) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:462/proxy/: tls qux (200; 5.227294ms) Aug 21 19:55:54.588: INFO: (19) /api/v1/namespaces/proxy-14/services/https:proxy-service-pqpw7:tlsportname2/proxy/: tls qux (200; 5.271075ms) Aug 21 19:55:54.588: INFO: (19) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:162/proxy/: bar (200; 5.359243ms) Aug 21 19:55:54.588: INFO: (19) /api/v1/namespaces/proxy-14/pods/proxy-service-pqpw7-9q4ts:1080/proxy/: testtest (200; 5.660905ms) Aug 21 19:55:54.588: INFO: (19) /api/v1/namespaces/proxy-14/services/http:proxy-service-pqpw7:portname2/proxy/: bar (200; 5.706258ms) Aug 21 19:55:54.588: INFO: (19) /api/v1/namespaces/proxy-14/pods/https:proxy-service-pqpw7-9q4ts:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Aug 21 19:56:03.554: INFO: Waiting up to 5m0s for pod "pod-bb488510-2f72-4319-a280-e9c021954980" in namespace "emptydir-6205" to be "success or failure" Aug 21 19:56:03.568: INFO: Pod "pod-bb488510-2f72-4319-a280-e9c021954980": Phase="Pending", Reason="", readiness=false. Elapsed: 13.71393ms Aug 21 19:56:05.586: INFO: Pod "pod-bb488510-2f72-4319-a280-e9c021954980": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0316556s Aug 21 19:56:07.654: INFO: Pod "pod-bb488510-2f72-4319-a280-e9c021954980": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.100276618s STEP: Saw pod success Aug 21 19:56:07.655: INFO: Pod "pod-bb488510-2f72-4319-a280-e9c021954980" satisfied condition "success or failure" Aug 21 19:56:07.657: INFO: Trying to get logs from node iruya-worker pod pod-bb488510-2f72-4319-a280-e9c021954980 container test-container: STEP: delete the pod Aug 21 19:56:07.725: INFO: Waiting for pod pod-bb488510-2f72-4319-a280-e9c021954980 to disappear Aug 21 19:56:07.792: INFO: Pod pod-bb488510-2f72-4319-a280-e9c021954980 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:56:07.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6205" for this suite. 
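The (non-root,0644,tmpfs) variant verified above differs from the default-medium sketch given earlier in only two fields: the pod runs as a non-root UID, and the volume requests medium: Memory, which kubelet backs with tmpfs. Again a rough equivalent with hypothetical names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000            # write as a non-root user
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "echo content > /mnt/volume/file && chmod 0644 /mnt/volume/file && stat -c '%a' /mnt/volume/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory           # tmpfs-backed emptyDir
EOF

kubectl logs emptydir-tmpfs-demo   # expect: 644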
Aug 21 19:56:13.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:56:13.918: INFO: namespace emptydir-6205 deletion completed in 6.122319328s • [SLOW TEST:10.475 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:56:13.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:57:14.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3604" for this suite. 
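The pod behind this probe spec is deliberately simple: its readiness probe always exits non-zero, so the container keeps running, never turns Ready, and is never restarted (readiness failures, unlike liveness failures, do not trigger restarts). Roughly, with hypothetical names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: never-ready
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails, so the pod never becomes Ready
      initialDelaySeconds: 5
      periodSeconds: 5
EOF

# READY should stay 0/1 while RESTARTS stays 0.
kubectl get pod never-ready
kubectl get pod never-ready -o jsonpath='{.status.containerStatuses[0].restartCount}'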
Aug 21 19:57:36.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:57:36.098: INFO: namespace container-probe-3604 deletion completed in 22.086598766s • [SLOW TEST:82.179 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:57:36.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-8648 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8648 to expose endpoints map[] Aug 21 19:57:36.230: INFO: successfully validated that service multi-endpoint-test in namespace services-8648 exposes endpoints map[] (52.844751ms elapsed) STEP: Creating pod pod1 in namespace services-8648 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8648 to expose endpoints map[pod1:[100]] Aug 21 19:57:40.448: INFO: successfully validated that service multi-endpoint-test in namespace services-8648 exposes endpoints map[pod1:[100]] (4.21327348s elapsed) STEP: Creating pod pod2 in namespace services-8648 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8648 to expose endpoints map[pod1:[100] pod2:[101]] Aug 21 19:57:43.542: INFO: successfully validated that service multi-endpoint-test in namespace services-8648 exposes endpoints map[pod1:[100] pod2:[101]] (3.089771612s elapsed) STEP: Deleting pod pod1 in namespace services-8648 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8648 to expose endpoints map[pod2:[101]] Aug 21 19:57:44.577: INFO: successfully validated that service multi-endpoint-test in namespace services-8648 exposes endpoints map[pod2:[101]] (1.031906952s elapsed) STEP: Deleting pod pod2 in namespace services-8648 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8648 to expose endpoints map[] Aug 21 19:57:45.594: INFO: successfully validated that service multi-endpoint-test in namespace services-8648 exposes endpoints map[] (1.012244517s elapsed) [AfterEach] [sig-network] Services 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:57:45.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8648" for this suite. Aug 21 19:58:09.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:58:10.071: INFO: namespace services-8648 deletion completed in 24.190508109s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:33.973 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:58:10.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:58:14.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8076" for this suite. 
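As with the readiness spec earlier, the [It] block above emits no STEP lines; the assertion is that the kubelet fills in a termination reason for a container whose command always fails. Roughly (illustrative names, v0.15.x k8s.io/api assumed):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "always-fails"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits 1 immediately
			}},
		},
	}
	// Once the pod has run, status.containerStatuses[0].state.terminated
	// should carry a non-empty Reason (typically "Error") and ExitCode 1;
	// that populated Reason is what the spec asserts on.
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}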
Aug 21 19:58:20.277: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:58:20.347: INFO: namespace kubelet-test-8076 deletion completed in 6.081407978s • [SLOW TEST:10.275 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:58:20.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-314467e7-a7bd-4cbf-8369-2b465e81ee1b STEP: Creating a pod to test consume configMaps Aug 21 19:58:20.453: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376" in namespace "projected-6076" to be "success or failure" Aug 21 19:58:20.456: INFO: Pod "pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376": Phase="Pending", Reason="", readiness=false. Elapsed: 3.015677ms Aug 21 19:58:22.459: INFO: Pod "pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006337826s Aug 21 19:58:24.464: INFO: Pod "pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376": Phase="Running", Reason="", readiness=true. Elapsed: 4.010610255s Aug 21 19:58:26.468: INFO: Pod "pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.014629443s STEP: Saw pod success Aug 21 19:58:26.468: INFO: Pod "pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376" satisfied condition "success or failure" Aug 21 19:58:26.471: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376 container projected-configmap-volume-test: STEP: delete the pod Aug 21 19:58:26.491: INFO: Waiting for pod pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376 to disappear Aug 21 19:58:26.505: INFO: Pod pod-projected-configmaps-355205b9-7295-485a-ab52-7a1792e69376 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:58:26.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6076" for this suite. Aug 21 19:58:32.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:58:32.601: INFO: namespace projected-6076 deletion completed in 6.092490481s • [SLOW TEST:12.253 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:58:32.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Aug 21 19:58:44.779: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:44.779: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:44.820563 6 log.go:172] (0xc001e4c580) (0xc0022257c0) Create stream I0821 19:58:44.820591 6 log.go:172] (0xc001e4c580) (0xc0022257c0) Stream added, broadcasting: 1 I0821 19:58:44.822879 6 log.go:172] (0xc001e4c580) Reply frame received for 1 I0821 19:58:44.822925 6 log.go:172] (0xc001e4c580) (0xc0028cd540) Create stream I0821 19:58:44.822942 6 log.go:172] (0xc001e4c580) (0xc0028cd540) 
Stream added, broadcasting: 3 I0821 19:58:44.823846 6 log.go:172] (0xc001e4c580) Reply frame received for 3 I0821 19:58:44.823878 6 log.go:172] (0xc001e4c580) (0xc002225860) Create stream I0821 19:58:44.823890 6 log.go:172] (0xc001e4c580) (0xc002225860) Stream added, broadcasting: 5 I0821 19:58:44.824839 6 log.go:172] (0xc001e4c580) Reply frame received for 5 I0821 19:58:44.929208 6 log.go:172] (0xc001e4c580) Data frame received for 5 I0821 19:58:44.929269 6 log.go:172] (0xc002225860) (5) Data frame handling I0821 19:58:44.929305 6 log.go:172] (0xc001e4c580) Data frame received for 3 I0821 19:58:44.929326 6 log.go:172] (0xc0028cd540) (3) Data frame handling I0821 19:58:44.929359 6 log.go:172] (0xc0028cd540) (3) Data frame sent I0821 19:58:44.929377 6 log.go:172] (0xc001e4c580) Data frame received for 3 I0821 19:58:44.929397 6 log.go:172] (0xc0028cd540) (3) Data frame handling I0821 19:58:44.931182 6 log.go:172] (0xc001e4c580) Data frame received for 1 I0821 19:58:44.931197 6 log.go:172] (0xc0022257c0) (1) Data frame handling I0821 19:58:44.931204 6 log.go:172] (0xc0022257c0) (1) Data frame sent I0821 19:58:44.931268 6 log.go:172] (0xc001e4c580) (0xc0022257c0) Stream removed, broadcasting: 1 I0821 19:58:44.931307 6 log.go:172] (0xc001e4c580) Go away received I0821 19:58:44.931392 6 log.go:172] (0xc001e4c580) (0xc0022257c0) Stream removed, broadcasting: 1 I0821 19:58:44.931423 6 log.go:172] (0xc001e4c580) (0xc0028cd540) Stream removed, broadcasting: 3 I0821 19:58:44.931441 6 log.go:172] (0xc001e4c580) (0xc002225860) Stream removed, broadcasting: 5 Aug 21 19:58:44.931: INFO: Exec stderr: "" Aug 21 19:58:44.931: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:44.931: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:44.963461 6 log.go:172] (0xc001f22160) (0xc0023be000) Create stream I0821 19:58:44.963500 6 log.go:172] (0xc001f22160) (0xc0023be000) Stream added, broadcasting: 1 I0821 19:58:44.966210 6 log.go:172] (0xc001f22160) Reply frame received for 1 I0821 19:58:44.966254 6 log.go:172] (0xc001f22160) (0xc002225900) Create stream I0821 19:58:44.966271 6 log.go:172] (0xc001f22160) (0xc002225900) Stream added, broadcasting: 3 I0821 19:58:44.967245 6 log.go:172] (0xc001f22160) Reply frame received for 3 I0821 19:58:44.967270 6 log.go:172] (0xc001f22160) (0xc0028cd5e0) Create stream I0821 19:58:44.967276 6 log.go:172] (0xc001f22160) (0xc0028cd5e0) Stream added, broadcasting: 5 I0821 19:58:44.968417 6 log.go:172] (0xc001f22160) Reply frame received for 5 I0821 19:58:45.041885 6 log.go:172] (0xc001f22160) Data frame received for 5 I0821 19:58:45.041930 6 log.go:172] (0xc0028cd5e0) (5) Data frame handling I0821 19:58:45.041959 6 log.go:172] (0xc001f22160) Data frame received for 3 I0821 19:58:45.041976 6 log.go:172] (0xc002225900) (3) Data frame handling I0821 19:58:45.041996 6 log.go:172] (0xc002225900) (3) Data frame sent I0821 19:58:45.042011 6 log.go:172] (0xc001f22160) Data frame received for 3 I0821 19:58:45.042024 6 log.go:172] (0xc002225900) (3) Data frame handling I0821 19:58:45.043857 6 log.go:172] (0xc001f22160) Data frame received for 1 I0821 19:58:45.043941 6 log.go:172] (0xc0023be000) (1) Data frame handling I0821 19:58:45.044023 6 log.go:172] (0xc0023be000) (1) Data frame sent I0821 19:58:45.044059 6 log.go:172] (0xc001f22160) (0xc0023be000) Stream removed, broadcasting: 1 I0821 19:58:45.044097 6 log.go:172] 
(0xc001f22160) Go away received I0821 19:58:45.044233 6 log.go:172] (0xc001f22160) (0xc0023be000) Stream removed, broadcasting: 1 I0821 19:58:45.044273 6 log.go:172] (0xc001f22160) (0xc002225900) Stream removed, broadcasting: 3 I0821 19:58:45.044317 6 log.go:172] (0xc001f22160) (0xc0028cd5e0) Stream removed, broadcasting: 5 Aug 21 19:58:45.044: INFO: Exec stderr: "" Aug 21 19:58:45.044: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:45.044: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:45.073208 6 log.go:172] (0xc001f92f20) (0xc003ff81e0) Create stream I0821 19:58:45.073239 6 log.go:172] (0xc001f92f20) (0xc003ff81e0) Stream added, broadcasting: 1 I0821 19:58:45.075484 6 log.go:172] (0xc001f92f20) Reply frame received for 1 I0821 19:58:45.075530 6 log.go:172] (0xc001f92f20) (0xc0022259a0) Create stream I0821 19:58:45.075545 6 log.go:172] (0xc001f92f20) (0xc0022259a0) Stream added, broadcasting: 3 I0821 19:58:45.076664 6 log.go:172] (0xc001f92f20) Reply frame received for 3 I0821 19:58:45.076702 6 log.go:172] (0xc001f92f20) (0xc0023be140) Create stream I0821 19:58:45.076715 6 log.go:172] (0xc001f92f20) (0xc0023be140) Stream added, broadcasting: 5 I0821 19:58:45.077992 6 log.go:172] (0xc001f92f20) Reply frame received for 5 I0821 19:58:45.155846 6 log.go:172] (0xc001f92f20) Data frame received for 5 I0821 19:58:45.155886 6 log.go:172] (0xc0023be140) (5) Data frame handling I0821 19:58:45.155909 6 log.go:172] (0xc001f92f20) Data frame received for 3 I0821 19:58:45.155925 6 log.go:172] (0xc0022259a0) (3) Data frame handling I0821 19:58:45.155936 6 log.go:172] (0xc0022259a0) (3) Data frame sent I0821 19:58:45.155948 6 log.go:172] (0xc001f92f20) Data frame received for 3 I0821 19:58:45.155962 6 log.go:172] (0xc0022259a0) (3) Data frame handling I0821 19:58:45.157364 6 log.go:172] (0xc001f92f20) Data frame received for 1 I0821 19:58:45.157449 6 log.go:172] (0xc003ff81e0) (1) Data frame handling I0821 19:58:45.157513 6 log.go:172] (0xc003ff81e0) (1) Data frame sent I0821 19:58:45.157585 6 log.go:172] (0xc001f92f20) (0xc003ff81e0) Stream removed, broadcasting: 1 I0821 19:58:45.157628 6 log.go:172] (0xc001f92f20) Go away received I0821 19:58:45.157718 6 log.go:172] (0xc001f92f20) (0xc003ff81e0) Stream removed, broadcasting: 1 I0821 19:58:45.157748 6 log.go:172] (0xc001f92f20) (0xc0022259a0) Stream removed, broadcasting: 3 I0821 19:58:45.157757 6 log.go:172] (0xc001f92f20) (0xc0023be140) Stream removed, broadcasting: 5 Aug 21 19:58:45.157: INFO: Exec stderr: "" Aug 21 19:58:45.157: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:45.157: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:45.241141 6 log.go:172] (0xc001329080) (0xc0022e17c0) Create stream I0821 19:58:45.241180 6 log.go:172] (0xc001329080) (0xc0022e17c0) Stream added, broadcasting: 1 I0821 19:58:45.245469 6 log.go:172] (0xc001329080) Reply frame received for 1 I0821 19:58:45.245530 6 log.go:172] (0xc001329080) (0xc002225ae0) Create stream I0821 19:58:45.245552 6 log.go:172] (0xc001329080) (0xc002225ae0) Stream added, broadcasting: 3 I0821 19:58:45.246509 6 log.go:172] (0xc001329080) Reply frame received for 3 I0821 19:58:45.246539 6 log.go:172] (0xc001329080) (0xc003ff8280) Create stream I0821 
19:58:45.246547 6 log.go:172] (0xc001329080) (0xc003ff8280) Stream added, broadcasting: 5 I0821 19:58:45.247323 6 log.go:172] (0xc001329080) Reply frame received for 5 I0821 19:58:45.312574 6 log.go:172] (0xc001329080) Data frame received for 5 I0821 19:58:45.312611 6 log.go:172] (0xc003ff8280) (5) Data frame handling I0821 19:58:45.312629 6 log.go:172] (0xc001329080) Data frame received for 3 I0821 19:58:45.312637 6 log.go:172] (0xc002225ae0) (3) Data frame handling I0821 19:58:45.312643 6 log.go:172] (0xc002225ae0) (3) Data frame sent I0821 19:58:45.312651 6 log.go:172] (0xc001329080) Data frame received for 3 I0821 19:58:45.312659 6 log.go:172] (0xc002225ae0) (3) Data frame handling I0821 19:58:45.314010 6 log.go:172] (0xc001329080) Data frame received for 1 I0821 19:58:45.314031 6 log.go:172] (0xc0022e17c0) (1) Data frame handling I0821 19:58:45.314044 6 log.go:172] (0xc0022e17c0) (1) Data frame sent I0821 19:58:45.314054 6 log.go:172] (0xc001329080) (0xc0022e17c0) Stream removed, broadcasting: 1 I0821 19:58:45.314116 6 log.go:172] (0xc001329080) Go away received I0821 19:58:45.314161 6 log.go:172] (0xc001329080) (0xc0022e17c0) Stream removed, broadcasting: 1 I0821 19:58:45.314181 6 log.go:172] (0xc001329080) (0xc002225ae0) Stream removed, broadcasting: 3 I0821 19:58:45.314186 6 log.go:172] (0xc001329080) (0xc003ff8280) Stream removed, broadcasting: 5 Aug 21 19:58:45.314: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Aug 21 19:58:45.314: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:45.314: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:45.358299 6 log.go:172] (0xc002ebc210) (0xc003ff8640) Create stream I0821 19:58:45.358323 6 log.go:172] (0xc002ebc210) (0xc003ff8640) Stream added, broadcasting: 1 I0821 19:58:45.361120 6 log.go:172] (0xc002ebc210) Reply frame received for 1 I0821 19:58:45.361146 6 log.go:172] (0xc002ebc210) (0xc0028cd680) Create stream I0821 19:58:45.361153 6 log.go:172] (0xc002ebc210) (0xc0028cd680) Stream added, broadcasting: 3 I0821 19:58:45.362067 6 log.go:172] (0xc002ebc210) Reply frame received for 3 I0821 19:58:45.362093 6 log.go:172] (0xc002ebc210) (0xc0023be1e0) Create stream I0821 19:58:45.362101 6 log.go:172] (0xc002ebc210) (0xc0023be1e0) Stream added, broadcasting: 5 I0821 19:58:45.362862 6 log.go:172] (0xc002ebc210) Reply frame received for 5 I0821 19:58:45.423536 6 log.go:172] (0xc002ebc210) Data frame received for 5 I0821 19:58:45.423555 6 log.go:172] (0xc0023be1e0) (5) Data frame handling I0821 19:58:45.423581 6 log.go:172] (0xc002ebc210) Data frame received for 3 I0821 19:58:45.423612 6 log.go:172] (0xc0028cd680) (3) Data frame handling I0821 19:58:45.423624 6 log.go:172] (0xc0028cd680) (3) Data frame sent I0821 19:58:45.423629 6 log.go:172] (0xc002ebc210) Data frame received for 3 I0821 19:58:45.423633 6 log.go:172] (0xc0028cd680) (3) Data frame handling I0821 19:58:45.424894 6 log.go:172] (0xc002ebc210) Data frame received for 1 I0821 19:58:45.424919 6 log.go:172] (0xc003ff8640) (1) Data frame handling I0821 19:58:45.424945 6 log.go:172] (0xc003ff8640) (1) Data frame sent I0821 19:58:45.424955 6 log.go:172] (0xc002ebc210) (0xc003ff8640) Stream removed, broadcasting: 1 I0821 19:58:45.425056 6 log.go:172] (0xc002ebc210) (0xc003ff8640) Stream removed, broadcasting: 1 I0821 19:58:45.425079 6 log.go:172] 
(0xc002ebc210) (0xc0028cd680) Stream removed, broadcasting: 3 I0821 19:58:45.425099 6 log.go:172] (0xc002ebc210) Go away received I0821 19:58:45.425139 6 log.go:172] (0xc002ebc210) (0xc0023be1e0) Stream removed, broadcasting: 5 Aug 21 19:58:45.425: INFO: Exec stderr: "" Aug 21 19:58:45.425: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:45.425: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:45.454921 6 log.go:172] (0xc002ebd080) (0xc003ff8960) Create stream I0821 19:58:45.454949 6 log.go:172] (0xc002ebd080) (0xc003ff8960) Stream added, broadcasting: 1 I0821 19:58:45.457964 6 log.go:172] (0xc002ebd080) Reply frame received for 1 I0821 19:58:45.458017 6 log.go:172] (0xc002ebd080) (0xc002225b80) Create stream I0821 19:58:45.458032 6 log.go:172] (0xc002ebd080) (0xc002225b80) Stream added, broadcasting: 3 I0821 19:58:45.458892 6 log.go:172] (0xc002ebd080) Reply frame received for 3 I0821 19:58:45.458929 6 log.go:172] (0xc002ebd080) (0xc0028cd720) Create stream I0821 19:58:45.458943 6 log.go:172] (0xc002ebd080) (0xc0028cd720) Stream added, broadcasting: 5 I0821 19:58:45.459805 6 log.go:172] (0xc002ebd080) Reply frame received for 5 I0821 19:58:45.531825 6 log.go:172] (0xc002ebd080) Data frame received for 5 I0821 19:58:45.531851 6 log.go:172] (0xc0028cd720) (5) Data frame handling I0821 19:58:45.531876 6 log.go:172] (0xc002ebd080) Data frame received for 3 I0821 19:58:45.531957 6 log.go:172] (0xc002225b80) (3) Data frame handling I0821 19:58:45.531986 6 log.go:172] (0xc002225b80) (3) Data frame sent I0821 19:58:45.531999 6 log.go:172] (0xc002ebd080) Data frame received for 3 I0821 19:58:45.532009 6 log.go:172] (0xc002225b80) (3) Data frame handling I0821 19:58:45.533844 6 log.go:172] (0xc002ebd080) Data frame received for 1 I0821 19:58:45.533869 6 log.go:172] (0xc003ff8960) (1) Data frame handling I0821 19:58:45.533890 6 log.go:172] (0xc003ff8960) (1) Data frame sent I0821 19:58:45.533914 6 log.go:172] (0xc002ebd080) (0xc003ff8960) Stream removed, broadcasting: 1 I0821 19:58:45.533942 6 log.go:172] (0xc002ebd080) Go away received I0821 19:58:45.534221 6 log.go:172] (0xc002ebd080) (0xc003ff8960) Stream removed, broadcasting: 1 I0821 19:58:45.534280 6 log.go:172] (0xc002ebd080) (0xc002225b80) Stream removed, broadcasting: 3 I0821 19:58:45.534304 6 log.go:172] (0xc002ebd080) (0xc0028cd720) Stream removed, broadcasting: 5 Aug 21 19:58:45.534: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Aug 21 19:58:45.534: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:45.534: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:45.566539 6 log.go:172] (0xc003fb2370) (0xc002225ea0) Create stream I0821 19:58:45.566571 6 log.go:172] (0xc003fb2370) (0xc002225ea0) Stream added, broadcasting: 1 I0821 19:58:45.569590 6 log.go:172] (0xc003fb2370) Reply frame received for 1 I0821 19:58:45.569638 6 log.go:172] (0xc003fb2370) (0xc003ff8a00) Create stream I0821 19:58:45.569653 6 log.go:172] (0xc003fb2370) (0xc003ff8a00) Stream added, broadcasting: 3 I0821 19:58:45.570781 6 log.go:172] (0xc003fb2370) Reply frame received for 3 I0821 19:58:45.570819 6 log.go:172] (0xc003fb2370) (0xc003ff8aa0) Create 
stream I0821 19:58:45.570833 6 log.go:172] (0xc003fb2370) (0xc003ff8aa0) Stream added, broadcasting: 5 I0821 19:58:45.571780 6 log.go:172] (0xc003fb2370) Reply frame received for 5 I0821 19:58:45.631242 6 log.go:172] (0xc003fb2370) Data frame received for 3 I0821 19:58:45.631267 6 log.go:172] (0xc003ff8a00) (3) Data frame handling I0821 19:58:45.631298 6 log.go:172] (0xc003fb2370) Data frame received for 5 I0821 19:58:45.631354 6 log.go:172] (0xc003ff8aa0) (5) Data frame handling I0821 19:58:45.631395 6 log.go:172] (0xc003ff8a00) (3) Data frame sent I0821 19:58:45.631418 6 log.go:172] (0xc003fb2370) Data frame received for 3 I0821 19:58:45.631454 6 log.go:172] (0xc003ff8a00) (3) Data frame handling I0821 19:58:45.633706 6 log.go:172] (0xc003fb2370) Data frame received for 1 I0821 19:58:45.633740 6 log.go:172] (0xc002225ea0) (1) Data frame handling I0821 19:58:45.633754 6 log.go:172] (0xc002225ea0) (1) Data frame sent I0821 19:58:45.633773 6 log.go:172] (0xc003fb2370) (0xc002225ea0) Stream removed, broadcasting: 1 I0821 19:58:45.633828 6 log.go:172] (0xc003fb2370) Go away received I0821 19:58:45.633885 6 log.go:172] (0xc003fb2370) (0xc002225ea0) Stream removed, broadcasting: 1 I0821 19:58:45.633910 6 log.go:172] (0xc003fb2370) (0xc003ff8a00) Stream removed, broadcasting: 3 I0821 19:58:45.633933 6 log.go:172] (0xc003fb2370) (0xc003ff8aa0) Stream removed, broadcasting: 5 Aug 21 19:58:45.633: INFO: Exec stderr: "" Aug 21 19:58:45.633: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:45.634: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:45.665987 6 log.go:172] (0xc0026ec790) (0xc003ff8dc0) Create stream I0821 19:58:45.666021 6 log.go:172] (0xc0026ec790) (0xc003ff8dc0) Stream added, broadcasting: 1 I0821 19:58:45.669587 6 log.go:172] (0xc0026ec790) Reply frame received for 1 I0821 19:58:45.669624 6 log.go:172] (0xc0026ec790) (0xc002225f40) Create stream I0821 19:58:45.669642 6 log.go:172] (0xc0026ec790) (0xc002225f40) Stream added, broadcasting: 3 I0821 19:58:45.670558 6 log.go:172] (0xc0026ec790) Reply frame received for 3 I0821 19:58:45.670589 6 log.go:172] (0xc0026ec790) (0xc003ff8e60) Create stream I0821 19:58:45.670599 6 log.go:172] (0xc0026ec790) (0xc003ff8e60) Stream added, broadcasting: 5 I0821 19:58:45.671393 6 log.go:172] (0xc0026ec790) Reply frame received for 5 I0821 19:58:45.745601 6 log.go:172] (0xc0026ec790) Data frame received for 5 I0821 19:58:45.745644 6 log.go:172] (0xc0026ec790) Data frame received for 3 I0821 19:58:45.745680 6 log.go:172] (0xc002225f40) (3) Data frame handling I0821 19:58:45.745699 6 log.go:172] (0xc002225f40) (3) Data frame sent I0821 19:58:45.745714 6 log.go:172] (0xc0026ec790) Data frame received for 3 I0821 19:58:45.745728 6 log.go:172] (0xc002225f40) (3) Data frame handling I0821 19:58:45.745757 6 log.go:172] (0xc003ff8e60) (5) Data frame handling I0821 19:58:45.747180 6 log.go:172] (0xc0026ec790) Data frame received for 1 I0821 19:58:45.747209 6 log.go:172] (0xc003ff8dc0) (1) Data frame handling I0821 19:58:45.747233 6 log.go:172] (0xc003ff8dc0) (1) Data frame sent I0821 19:58:45.747258 6 log.go:172] (0xc0026ec790) (0xc003ff8dc0) Stream removed, broadcasting: 1 I0821 19:58:45.747279 6 log.go:172] (0xc0026ec790) Go away received I0821 19:58:45.747373 6 log.go:172] (0xc0026ec790) (0xc003ff8dc0) Stream removed, broadcasting: 1 I0821 19:58:45.747388 6 log.go:172] 
(0xc0026ec790) (0xc002225f40) Stream removed, broadcasting: 3 I0821 19:58:45.747397 6 log.go:172] (0xc0026ec790) (0xc003ff8e60) Stream removed, broadcasting: 5 Aug 21 19:58:45.747: INFO: Exec stderr: "" Aug 21 19:58:45.747: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:45.747: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:45.778023 6 log.go:172] (0xc0026ed290) (0xc003ff9180) Create stream I0821 19:58:45.778081 6 log.go:172] (0xc0026ed290) (0xc003ff9180) Stream added, broadcasting: 1 I0821 19:58:45.781177 6 log.go:172] (0xc0026ed290) Reply frame received for 1 I0821 19:58:45.781220 6 log.go:172] (0xc0026ed290) (0xc001fec000) Create stream I0821 19:58:45.781233 6 log.go:172] (0xc0026ed290) (0xc001fec000) Stream added, broadcasting: 3 I0821 19:58:45.782282 6 log.go:172] (0xc0026ed290) Reply frame received for 3 I0821 19:58:45.782317 6 log.go:172] (0xc0026ed290) (0xc0022e1900) Create stream I0821 19:58:45.782329 6 log.go:172] (0xc0026ed290) (0xc0022e1900) Stream added, broadcasting: 5 I0821 19:58:45.783450 6 log.go:172] (0xc0026ed290) Reply frame received for 5 I0821 19:58:45.857443 6 log.go:172] (0xc0026ed290) Data frame received for 5 I0821 19:58:45.857478 6 log.go:172] (0xc0022e1900) (5) Data frame handling I0821 19:58:45.857500 6 log.go:172] (0xc0026ed290) Data frame received for 3 I0821 19:58:45.857512 6 log.go:172] (0xc001fec000) (3) Data frame handling I0821 19:58:45.857522 6 log.go:172] (0xc001fec000) (3) Data frame sent I0821 19:58:45.857568 6 log.go:172] (0xc0026ed290) Data frame received for 3 I0821 19:58:45.857594 6 log.go:172] (0xc001fec000) (3) Data frame handling I0821 19:58:45.859190 6 log.go:172] (0xc0026ed290) Data frame received for 1 I0821 19:58:45.859221 6 log.go:172] (0xc003ff9180) (1) Data frame handling I0821 19:58:45.859238 6 log.go:172] (0xc003ff9180) (1) Data frame sent I0821 19:58:45.859268 6 log.go:172] (0xc0026ed290) (0xc003ff9180) Stream removed, broadcasting: 1 I0821 19:58:45.859291 6 log.go:172] (0xc0026ed290) Go away received I0821 19:58:45.859410 6 log.go:172] (0xc0026ed290) (0xc003ff9180) Stream removed, broadcasting: 1 I0821 19:58:45.859440 6 log.go:172] (0xc0026ed290) (0xc001fec000) Stream removed, broadcasting: 3 I0821 19:58:45.859459 6 log.go:172] (0xc0026ed290) (0xc0022e1900) Stream removed, broadcasting: 5 Aug 21 19:58:45.859: INFO: Exec stderr: "" Aug 21 19:58:45.859: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2419 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Aug 21 19:58:45.859: INFO: >>> kubeConfig: /root/.kube/config I0821 19:58:45.883581 6 log.go:172] (0xc0026edc30) (0xc003ff9540) Create stream I0821 19:58:45.883621 6 log.go:172] (0xc0026edc30) (0xc003ff9540) Stream added, broadcasting: 1 I0821 19:58:45.886299 6 log.go:172] (0xc0026edc30) Reply frame received for 1 I0821 19:58:45.886350 6 log.go:172] (0xc0026edc30) (0xc0028cd860) Create stream I0821 19:58:45.886361 6 log.go:172] (0xc0026edc30) (0xc0028cd860) Stream added, broadcasting: 3 I0821 19:58:45.887286 6 log.go:172] (0xc0026edc30) Reply frame received for 3 I0821 19:58:45.887321 6 log.go:172] (0xc0026edc30) (0xc0022e19a0) Create stream I0821 19:58:45.887332 6 log.go:172] (0xc0026edc30) (0xc0022e19a0) Stream added, broadcasting: 5 I0821 19:58:45.888144 6 log.go:172] (0xc0026edc30) Reply 
frame received for 5 I0821 19:58:45.955911 6 log.go:172] (0xc0026edc30) Data frame received for 3 I0821 19:58:45.955946 6 log.go:172] (0xc0028cd860) (3) Data frame handling I0821 19:58:45.955968 6 log.go:172] (0xc0028cd860) (3) Data frame sent I0821 19:58:45.955984 6 log.go:172] (0xc0026edc30) Data frame received for 3 I0821 19:58:45.955995 6 log.go:172] (0xc0028cd860) (3) Data frame handling I0821 19:58:45.956030 6 log.go:172] (0xc0026edc30) Data frame received for 5 I0821 19:58:45.956042 6 log.go:172] (0xc0022e19a0) (5) Data frame handling I0821 19:58:45.957785 6 log.go:172] (0xc0026edc30) Data frame received for 1 I0821 19:58:45.957829 6 log.go:172] (0xc003ff9540) (1) Data frame handling I0821 19:58:45.957885 6 log.go:172] (0xc003ff9540) (1) Data frame sent I0821 19:58:45.957911 6 log.go:172] (0xc0026edc30) (0xc003ff9540) Stream removed, broadcasting: 1 I0821 19:58:45.957938 6 log.go:172] (0xc0026edc30) Go away received I0821 19:58:45.958050 6 log.go:172] (0xc0026edc30) (0xc003ff9540) Stream removed, broadcasting: 1 I0821 19:58:45.958076 6 log.go:172] (0xc0026edc30) (0xc0028cd860) Stream removed, broadcasting: 3 I0821 19:58:45.958096 6 log.go:172] (0xc0026edc30) (0xc0022e19a0) Stream removed, broadcasting: 5 Aug 21 19:58:45.958: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:58:45.958: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2419" for this suite. Aug 21 19:59:37.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:59:38.069: INFO: namespace e2e-kubelet-etc-hosts-2419 deletion completed in 52.107113162s • [SLOW TEST:65.468 seconds] [k8s.io] KubeletManagedEtcHosts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:59:38.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin 
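The pod created at this step asks the downward API, via a projected volume, for the container's own memory limit; the container prints the mounted file and the framework compares it against the declared limit. A sketch with an illustrative mount path and limit value (the container name client-container matches the log; v0.15.x k8s.io/api assumed):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"cat", "/etc/podinfo/memory_limit"},
				Resources: corev1.ResourceRequirements{
					Limits: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									// resourceFieldRef resolves against the named
									// container's own resources at mount time.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}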
Aug 21 19:59:38.135: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d76dc11d-719b-4928-8ee4-9658ca418141" in namespace "projected-5977" to be "success or failure" Aug 21 19:59:38.138: INFO: Pod "downwardapi-volume-d76dc11d-719b-4928-8ee4-9658ca418141": Phase="Pending", Reason="", readiness=false. Elapsed: 3.407603ms Aug 21 19:59:40.142: INFO: Pod "downwardapi-volume-d76dc11d-719b-4928-8ee4-9658ca418141": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007085084s Aug 21 19:59:42.146: INFO: Pod "downwardapi-volume-d76dc11d-719b-4928-8ee4-9658ca418141": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011072692s STEP: Saw pod success Aug 21 19:59:42.146: INFO: Pod "downwardapi-volume-d76dc11d-719b-4928-8ee4-9658ca418141" satisfied condition "success or failure" Aug 21 19:59:42.149: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-d76dc11d-719b-4928-8ee4-9658ca418141 container client-container: STEP: delete the pod Aug 21 19:59:42.239: INFO: Waiting for pod downwardapi-volume-d76dc11d-719b-4928-8ee4-9658ca418141 to disappear Aug 21 19:59:42.264: INFO: Pod downwardapi-volume-d76dc11d-719b-4928-8ee4-9658ca418141 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 19:59:42.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5977" for this suite. Aug 21 19:59:48.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 19:59:48.358: INFO: namespace projected-5977 deletion completed in 6.091098565s • [SLOW TEST:10.288 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 19:59:48.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Aug 21 19:59:56.524: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 19:59:56.527: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 19:59:58.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 19:59:58.532: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:00.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:00.531: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:02.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:02.532: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:04.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:04.532: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:06.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:06.533: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:08.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:08.531: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:10.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:10.532: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:12.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:12.532: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:14.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:14.531: INFO: Pod pod-with-poststart-exec-hook still exists Aug 21 20:00:16.527: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Aug 21 20:00:16.532: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:00:16.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-1438" for this suite. 
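The long tail of "still exists" polls above is just deletion bookkeeping; the substance of the spec is the hook itself. A postStart exec hook runs right after the container's entrypoint starts, and the kubelet does not consider the container started until the hook returns (a failing hook kills the container). An illustrative sketch, assuming the v0.15.x k8s.io/api where hooks use the shared Handler type; the suite's real hook contacts the HTTPGet-handling pod created in BeforeEach, while this one merely touches a file:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-exec-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
				Lifecycle: &corev1.Lifecycle{
					// Runs in the container's namespaces immediately after the
					// entrypoint starts; the container is not marked started
					// until this command returns successfully.
					PostStart: &corev1.Handler{
						Exec: &corev1.ExecAction{
							Command: []string{"sh", "-c", "echo started > /tmp/poststart"},
						},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}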
Aug 21 20:00:38.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:00:38.644: INFO: namespace container-lifecycle-hook-1438 deletion completed in 22.107946957s • [SLOW TEST:50.286 seconds] [k8s.io] Container Lifecycle Hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:00:38.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:00:42.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1655" for this suite. 
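The read-only spec above verifies that a container granted only a read-only root filesystem cannot write to it. The switch is a single securityContext field; a sketch (illustrative names, v0.15.x k8s.io/api assumed):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	readOnly := true
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-readonly-fs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "busybox",
				Image: "busybox",
				// The redirection fails with a read-only filesystem error,
				// so the || branch reports success for this demonstration.
				Command: []string{"sh", "-c",
					"echo x > /file && echo UNEXPECTED-WRITABLE || echo root fs is read-only"},
				SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}

Writes to any path on the root filesystem fail with EROFS; only volumes explicitly mounted into the container remain writable.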
Aug 21 20:01:44.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:01:44.823: INFO: namespace kubelet-test-1655 deletion completed in 1m2.079608954s • [SLOW TEST:66.179 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:01:44.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Aug 21 20:01:44.885: INFO: Waiting up to 5m0s for pod "client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8" in namespace "containers-8152" to be "success or failure" Aug 21 20:01:44.889: INFO: Pod "client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.51699ms Aug 21 20:01:46.893: INFO: Pod "client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00730538s Aug 21 20:01:48.896: INFO: Pod "client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8": Phase="Running", Reason="", readiness=true. Elapsed: 4.010362728s Aug 21 20:01:50.899: INFO: Pod "client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.013925014s STEP: Saw pod success Aug 21 20:01:50.899: INFO: Pod "client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8" satisfied condition "success or failure" Aug 21 20:01:50.902: INFO: Trying to get logs from node iruya-worker2 pod client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8 container test-container: STEP: delete the pod Aug 21 20:01:50.985: INFO: Waiting for pod client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8 to disappear Aug 21 20:01:50.989: INFO: Pod client-containers-4b4cce98-eed6-455b-9ab9-e5f1c5712ce8 no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:01:50.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8152" for this suite. Aug 21 20:01:57.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:01:57.064: INFO: namespace containers-8152 deletion completed in 6.070819498s • [SLOW TEST:12.240 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:01:57.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0821 20:02:37.849560 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
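The 30-second wait above is the point of the garbage-collector spec: after the RC is deleted with orphaning delete options, its pods must still exist. The options that select this behavior are small enough to show in full (DeleteOptions shape per the v0.15.x k8s.io/apimachinery; the client call in the comment follows the pre-1.17 client-go signature):

package main

import (
	"encoding/json"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	orphan := metav1.DeletePropagationOrphan
	opts := metav1.DeleteOptions{PropagationPolicy: &orphan}
	// With this policy the garbage collector strips the dependents'
	// ownerReferences instead of deleting them, so the RC's pods live on.
	// Against a v1.15-era client-go this would be passed as, e.g.:
	//   client.CoreV1().ReplicationControllers(ns).Delete(name, &opts)
	out, _ := json.MarshalIndent(opts, "", "  ")
	fmt.Println(string(out))
}

kubectl of that era exposed the same choice as --cascade=false; later versions spell it --cascade=orphan.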
Aug 21 20:02:37.849: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:02:37.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-376" for this suite. Aug 21 20:02:45.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:02:45.951: INFO: namespace gc-376 deletion completed in 8.097872258s • [SLOW TEST:48.887 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:02:45.951: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5946 [It] should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Aug 21 20:02:47.098: INFO: 
Found 0 stateful pods, waiting for 3 Aug 21 20:02:57.107: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 20:02:57.107: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 20:02:57.107: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Aug 21 20:03:07.101: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Aug 21 20:03:07.101: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Aug 21 20:03:07.101: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Aug 21 20:03:07.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5946 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Aug 21 20:03:12.123: INFO: stderr: "I0821 20:03:12.011671 2235 log.go:172] (0xc00010cf20) (0xc0005948c0) Create stream\nI0821 20:03:12.011706 2235 log.go:172] (0xc00010cf20) (0xc0005948c0) Stream added, broadcasting: 1\nI0821 20:03:12.013533 2235 log.go:172] (0xc00010cf20) Reply frame received for 1\nI0821 20:03:12.013568 2235 log.go:172] (0xc00010cf20) (0xc0006a6000) Create stream\nI0821 20:03:12.013577 2235 log.go:172] (0xc00010cf20) (0xc0006a6000) Stream added, broadcasting: 3\nI0821 20:03:12.014208 2235 log.go:172] (0xc00010cf20) Reply frame received for 3\nI0821 20:03:12.014237 2235 log.go:172] (0xc00010cf20) (0xc0006a60a0) Create stream\nI0821 20:03:12.014249 2235 log.go:172] (0xc00010cf20) (0xc0006a60a0) Stream added, broadcasting: 5\nI0821 20:03:12.014896 2235 log.go:172] (0xc00010cf20) Reply frame received for 5\nI0821 20:03:12.078421 2235 log.go:172] (0xc00010cf20) Data frame received for 5\nI0821 20:03:12.078452 2235 log.go:172] (0xc0006a60a0) (5) Data frame handling\nI0821 20:03:12.078479 2235 log.go:172] (0xc0006a60a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 20:03:12.115141 2235 log.go:172] (0xc00010cf20) Data frame received for 5\nI0821 20:03:12.115159 2235 log.go:172] (0xc0006a60a0) (5) Data frame handling\nI0821 20:03:12.115172 2235 log.go:172] (0xc00010cf20) Data frame received for 3\nI0821 20:03:12.115176 2235 log.go:172] (0xc0006a6000) (3) Data frame handling\nI0821 20:03:12.115180 2235 log.go:172] (0xc0006a6000) (3) Data frame sent\nI0821 20:03:12.115184 2235 log.go:172] (0xc00010cf20) Data frame received for 3\nI0821 20:03:12.115187 2235 log.go:172] (0xc0006a6000) (3) Data frame handling\nI0821 20:03:12.116943 2235 log.go:172] (0xc00010cf20) Data frame received for 1\nI0821 20:03:12.116968 2235 log.go:172] (0xc0005948c0) (1) Data frame handling\nI0821 20:03:12.116981 2235 log.go:172] (0xc0005948c0) (1) Data frame sent\nI0821 20:03:12.116996 2235 log.go:172] (0xc00010cf20) (0xc0005948c0) Stream removed, broadcasting: 1\nI0821 20:03:12.117013 2235 log.go:172] (0xc00010cf20) Go away received\nI0821 20:03:12.117403 2235 log.go:172] (0xc00010cf20) (0xc0005948c0) Stream removed, broadcasting: 1\nI0821 20:03:12.117424 2235 log.go:172] (0xc00010cf20) (0xc0006a6000) Stream removed, broadcasting: 3\nI0821 20:03:12.117434 2235 log.go:172] (0xc00010cf20) (0xc0006a60a0) Stream removed, broadcasting: 5\n" Aug 21 20:03:12.123: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 21 20:03:12.123: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: 
Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Aug 21 20:03:22.157: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Aug 21 20:03:32.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5946 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 21 20:03:32.501: INFO: stderr: "I0821 20:03:32.413372 2267 log.go:172] (0xc000616420) (0xc00053e6e0) Create stream\nI0821 20:03:32.413433 2267 log.go:172] (0xc000616420) (0xc00053e6e0) Stream added, broadcasting: 1\nI0821 20:03:32.415398 2267 log.go:172] (0xc000616420) Reply frame received for 1\nI0821 20:03:32.415430 2267 log.go:172] (0xc000616420) (0xc0007fc000) Create stream\nI0821 20:03:32.415441 2267 log.go:172] (0xc000616420) (0xc0007fc000) Stream added, broadcasting: 3\nI0821 20:03:32.416246 2267 log.go:172] (0xc000616420) Reply frame received for 3\nI0821 20:03:32.416278 2267 log.go:172] (0xc000616420) (0xc00053e780) Create stream\nI0821 20:03:32.416297 2267 log.go:172] (0xc000616420) (0xc00053e780) Stream added, broadcasting: 5\nI0821 20:03:32.417165 2267 log.go:172] (0xc000616420) Reply frame received for 5\nI0821 20:03:32.489265 2267 log.go:172] (0xc000616420) Data frame received for 3\nI0821 20:03:32.489368 2267 log.go:172] (0xc0007fc000) (3) Data frame handling\nI0821 20:03:32.489416 2267 log.go:172] (0xc0007fc000) (3) Data frame sent\nI0821 20:03:32.489435 2267 log.go:172] (0xc000616420) Data frame received for 3\nI0821 20:03:32.489443 2267 log.go:172] (0xc0007fc000) (3) Data frame handling\nI0821 20:03:32.489461 2267 log.go:172] (0xc000616420) Data frame received for 5\nI0821 20:03:32.489465 2267 log.go:172] (0xc00053e780) (5) Data frame handling\nI0821 20:03:32.489471 2267 log.go:172] (0xc00053e780) (5) Data frame sent\nI0821 20:03:32.489476 2267 log.go:172] (0xc000616420) Data frame received for 5\nI0821 20:03:32.489483 2267 log.go:172] (0xc00053e780) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0821 20:03:32.491138 2267 log.go:172] (0xc000616420) Data frame received for 1\nI0821 20:03:32.491157 2267 log.go:172] (0xc00053e6e0) (1) Data frame handling\nI0821 20:03:32.491167 2267 log.go:172] (0xc00053e6e0) (1) Data frame sent\nI0821 20:03:32.491176 2267 log.go:172] (0xc000616420) (0xc00053e6e0) Stream removed, broadcasting: 1\nI0821 20:03:32.491463 2267 log.go:172] (0xc000616420) (0xc00053e6e0) Stream removed, broadcasting: 1\nI0821 20:03:32.491474 2267 log.go:172] (0xc000616420) (0xc0007fc000) Stream removed, broadcasting: 3\nI0821 20:03:32.491555 2267 log.go:172] (0xc000616420) Go away received\nI0821 20:03:32.491591 2267 log.go:172] (0xc000616420) (0xc00053e780) Stream removed, broadcasting: 5\n" Aug 21 20:03:32.501: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 21 20:03:32.501: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 21 20:04:02.539: INFO: Waiting for StatefulSet statefulset-5946/ss2 to complete update Aug 21 20:04:02.539: INFO: Waiting for Pod statefulset-5946/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Aug 21 20:04:12.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5946 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || 
true' Aug 21 20:04:12.917: INFO: stderr: "I0821 20:04:12.783282 2288 log.go:172] (0xc0009d20b0) (0xc00089c640) Create stream\nI0821 20:04:12.783344 2288 log.go:172] (0xc0009d20b0) (0xc00089c640) Stream added, broadcasting: 1\nI0821 20:04:12.785678 2288 log.go:172] (0xc0009d20b0) Reply frame received for 1\nI0821 20:04:12.785712 2288 log.go:172] (0xc0009d20b0) (0xc0008b0000) Create stream\nI0821 20:04:12.785724 2288 log.go:172] (0xc0009d20b0) (0xc0008b0000) Stream added, broadcasting: 3\nI0821 20:04:12.786419 2288 log.go:172] (0xc0009d20b0) Reply frame received for 3\nI0821 20:04:12.786442 2288 log.go:172] (0xc0009d20b0) (0xc0002b0320) Create stream\nI0821 20:04:12.786450 2288 log.go:172] (0xc0009d20b0) (0xc0002b0320) Stream added, broadcasting: 5\nI0821 20:04:12.787187 2288 log.go:172] (0xc0009d20b0) Reply frame received for 5\nI0821 20:04:12.851662 2288 log.go:172] (0xc0009d20b0) Data frame received for 5\nI0821 20:04:12.851691 2288 log.go:172] (0xc0002b0320) (5) Data frame handling\nI0821 20:04:12.851713 2288 log.go:172] (0xc0002b0320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 20:04:12.909433 2288 log.go:172] (0xc0009d20b0) Data frame received for 3\nI0821 20:04:12.909473 2288 log.go:172] (0xc0008b0000) (3) Data frame handling\nI0821 20:04:12.909502 2288 log.go:172] (0xc0008b0000) (3) Data frame sent\nI0821 20:04:12.909516 2288 log.go:172] (0xc0009d20b0) Data frame received for 3\nI0821 20:04:12.909528 2288 log.go:172] (0xc0008b0000) (3) Data frame handling\nI0821 20:04:12.909623 2288 log.go:172] (0xc0009d20b0) Data frame received for 5\nI0821 20:04:12.909637 2288 log.go:172] (0xc0002b0320) (5) Data frame handling\nI0821 20:04:12.911123 2288 log.go:172] (0xc0009d20b0) Data frame received for 1\nI0821 20:04:12.911141 2288 log.go:172] (0xc00089c640) (1) Data frame handling\nI0821 20:04:12.911149 2288 log.go:172] (0xc00089c640) (1) Data frame sent\nI0821 20:04:12.911160 2288 log.go:172] (0xc0009d20b0) (0xc00089c640) Stream removed, broadcasting: 1\nI0821 20:04:12.911176 2288 log.go:172] (0xc0009d20b0) Go away received\nI0821 20:04:12.911510 2288 log.go:172] (0xc0009d20b0) (0xc00089c640) Stream removed, broadcasting: 1\nI0821 20:04:12.911529 2288 log.go:172] (0xc0009d20b0) (0xc0008b0000) Stream removed, broadcasting: 3\nI0821 20:04:12.911535 2288 log.go:172] (0xc0009d20b0) (0xc0002b0320) Stream removed, broadcasting: 5\n" Aug 21 20:04:12.917: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Aug 21 20:04:12.917: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Aug 21 20:04:23.233: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Aug 21 20:04:33.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5946 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Aug 21 20:04:33.821: INFO: stderr: "I0821 20:04:33.748622 2307 log.go:172] (0xc000aae370) (0xc000854640) Create stream\nI0821 20:04:33.748693 2307 log.go:172] (0xc000aae370) (0xc000854640) Stream added, broadcasting: 1\nI0821 20:04:33.751091 2307 log.go:172] (0xc000aae370) Reply frame received for 1\nI0821 20:04:33.751137 2307 log.go:172] (0xc000aae370) (0xc000ace000) Create stream\nI0821 20:04:33.751150 2307 log.go:172] (0xc000aae370) (0xc000ace000) Stream added, broadcasting: 3\nI0821 20:04:33.751896 2307 log.go:172] (0xc000aae370) Reply frame received for 3\nI0821 20:04:33.751931 2307 
log.go:172] (0xc000aae370) (0xc0005ec280) Create stream\nI0821 20:04:33.751947 2307 log.go:172] (0xc000aae370) (0xc0005ec280) Stream added, broadcasting: 5\nI0821 20:04:33.752696 2307 log.go:172] (0xc000aae370) Reply frame received for 5\nI0821 20:04:33.813204 2307 log.go:172] (0xc000aae370) Data frame received for 3\nI0821 20:04:33.813226 2307 log.go:172] (0xc000ace000) (3) Data frame handling\nI0821 20:04:33.813233 2307 log.go:172] (0xc000ace000) (3) Data frame sent\nI0821 20:04:33.813239 2307 log.go:172] (0xc000aae370) Data frame received for 3\nI0821 20:04:33.813244 2307 log.go:172] (0xc000ace000) (3) Data frame handling\nI0821 20:04:33.813251 2307 log.go:172] (0xc000aae370) Data frame received for 5\nI0821 20:04:33.813255 2307 log.go:172] (0xc0005ec280) (5) Data frame handling\nI0821 20:04:33.813260 2307 log.go:172] (0xc0005ec280) (5) Data frame sent\nI0821 20:04:33.813264 2307 log.go:172] (0xc000aae370) Data frame received for 5\nI0821 20:04:33.813268 2307 log.go:172] (0xc0005ec280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0821 20:04:33.814689 2307 log.go:172] (0xc000aae370) Data frame received for 1\nI0821 20:04:33.814714 2307 log.go:172] (0xc000854640) (1) Data frame handling\nI0821 20:04:33.814724 2307 log.go:172] (0xc000854640) (1) Data frame sent\nI0821 20:04:33.814744 2307 log.go:172] (0xc000aae370) (0xc000854640) Stream removed, broadcasting: 1\nI0821 20:04:33.814757 2307 log.go:172] (0xc000aae370) Go away received\nI0821 20:04:33.815085 2307 log.go:172] (0xc000aae370) (0xc000854640) Stream removed, broadcasting: 1\nI0821 20:04:33.815107 2307 log.go:172] (0xc000aae370) (0xc000ace000) Stream removed, broadcasting: 3\nI0821 20:04:33.815114 2307 log.go:172] (0xc000aae370) (0xc0005ec280) Stream removed, broadcasting: 5\n" Aug 21 20:04:33.821: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Aug 21 20:04:33.821: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Aug 21 20:04:44.769: INFO: Waiting for StatefulSet statefulset-5946/ss2 to complete update Aug 21 20:04:44.769: INFO: Waiting for Pod statefulset-5946/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 21 20:04:44.769: INFO: Waiting for Pod statefulset-5946/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 21 20:04:54.776: INFO: Waiting for StatefulSet statefulset-5946/ss2 to complete update Aug 21 20:04:54.776: INFO: Waiting for Pod statefulset-5946/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Aug 21 20:05:04.777: INFO: Waiting for StatefulSet statefulset-5946/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Aug 21 20:05:14.775: INFO: Deleting all statefulset in ns statefulset-5946 Aug 21 20:05:14.777: INFO: Scaling statefulset ss2 to 0 Aug 21 20:05:44.861: INFO: Waiting for statefulset status.replicas updated to 0 Aug 21 20:05:44.864: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:05:44.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5946" for this suite. 
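Note: the StatefulSet test above performs a rolling update (image bumped from nginx:1.14-alpine to nginx:1.15-alpine, controller revision ss2-6c5cd755cd -> ss2-7c9b54fd4c) and then rolls it back, with pods replaced in reverse ordinal order. A minimal sketch of the same flow driven by hand with kubectl; the statefulset name, namespace, and image tags mirror the log, while the container name "nginx" is an assumption about the test's pod template:

kubectl -n statefulset-5946 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
kubectl -n statefulset-5946 rollout status statefulset/ss2
# roll back to the previous controller revision
kubectl -n statefulset-5946 rollout undo statefulset/ss2

The suite additionally moves index.html out of pod ss2-1's web root (the mv to /tmp/ above) so the pod goes unready and the rolling update visibly pauses and resumes; plain rollout commands do not reproduce that part.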
Aug 21 20:05:52.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:05:52.988: INFO: namespace statefulset-5946 deletion completed in 8.104508841s • [SLOW TEST:187.037 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:05:52.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 20:05:53.072: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:05:57.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3754" for this suite. 
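Note: the pods test above retrieves container logs over a websocket against the API server's pod log subresource, rather than through kubectl. The same subresource can be hit as a plain HTTP GET; a sketch using kubectl proxy, where the pod name is a placeholder because the log above never prints it:

kubectl proxy --port=8001 &
curl 'http://127.0.0.1:8001/api/v1/namespaces/pods-3754/pods/<pod-name>/log?follow=true'

The e2e client negotiates a websocket upgrade on that URL instead of reading the chunked HTTP response; the payload is the same log stream.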
Aug 21 20:06:47.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:06:47.239: INFO: namespace pods-3754 deletion completed in 50.124820396s • [SLOW TEST:54.251 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:06:47.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-08be0376-2013-480d-b138-f61a8c9e1416 STEP: Creating a pod to test consume configMaps Aug 21 20:06:47.308: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e607cd59-1a6e-4623-8777-10877b450182" in namespace "projected-3905" to be "success or failure" Aug 21 20:06:47.327: INFO: Pod "pod-projected-configmaps-e607cd59-1a6e-4623-8777-10877b450182": Phase="Pending", Reason="", readiness=false. Elapsed: 18.357268ms Aug 21 20:06:49.461: INFO: Pod "pod-projected-configmaps-e607cd59-1a6e-4623-8777-10877b450182": Phase="Pending", Reason="", readiness=false. Elapsed: 2.153150013s Aug 21 20:06:51.465: INFO: Pod "pod-projected-configmaps-e607cd59-1a6e-4623-8777-10877b450182": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.1570119s STEP: Saw pod success Aug 21 20:06:51.465: INFO: Pod "pod-projected-configmaps-e607cd59-1a6e-4623-8777-10877b450182" satisfied condition "success or failure" Aug 21 20:06:51.468: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e607cd59-1a6e-4623-8777-10877b450182 container projected-configmap-volume-test: STEP: delete the pod Aug 21 20:06:51.494: INFO: Waiting for pod pod-projected-configmaps-e607cd59-1a6e-4623-8777-10877b450182 to disappear Aug 21 20:06:51.510: INFO: Pod pod-projected-configmaps-e607cd59-1a6e-4623-8777-10877b450182 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:06:51.510: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3905" for this suite. 
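Note: in the projected configMap test above, "defaultMode" is the permission bits applied to the files projected into the volume. A sketch of the shape of pod the test creates (all names and the mode below are illustrative; the suite generates random names and verifies the resulting mode from inside the container):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400
      sources:
      - configMap:
          name: demo-config
EOF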
Aug 21 20:06:57.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:06:57.645: INFO: namespace projected-3905 deletion completed in 6.131289629s • [SLOW TEST:10.405 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:06:57.645: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e36c653e-110c-4e87-b8cc-ce728fe858de STEP: Creating a pod to test consume secrets Aug 21 20:06:57.758: INFO: Waiting up to 5m0s for pod "pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6" in namespace "secrets-5711" to be "success or failure" Aug 21 20:06:57.774: INFO: Pod "pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 15.491982ms Aug 21 20:06:59.880: INFO: Pod "pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122036679s Aug 21 20:07:01.885: INFO: Pod "pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126247502s Aug 21 20:07:03.934: INFO: Pod "pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.175798312s STEP: Saw pod success Aug 21 20:07:03.934: INFO: Pod "pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6" satisfied condition "success or failure" Aug 21 20:07:03.937: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6 container secret-volume-test: STEP: delete the pod Aug 21 20:07:04.137: INFO: Waiting for pod pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6 to disappear Aug 21 20:07:04.170: INFO: Pod pod-secrets-4674e0a7-3786-43da-bc92-3db1b2da5bc6 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:07:04.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5711" for this suite. 
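Note: the secrets test above is the same idea with a plain secret volume instead of a projected one; defaultMode lands on the key files under the mount. Sketch with illustrative names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-defaultmode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      defaultMode: 0400
EOF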
Aug 21 20:07:10.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:07:10.400: INFO: namespace secrets-5711 deletion completed in 6.226594258s • [SLOW TEST:12.755 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:07:10.401: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-5e6b1302-1a09-4cc2-8e0c-64aebbaaef9d STEP: Creating secret with name secret-projected-all-test-volume-35d2adf1-b613-4e67-83a3-b29696f9938e STEP: Creating a pod to test Check all projections for projected volume plugin Aug 21 20:07:10.457: INFO: Waiting up to 5m0s for pod "projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1" in namespace "projected-5693" to be "success or failure" Aug 21 20:07:10.461: INFO: Pod "projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.143399ms Aug 21 20:07:12.464: INFO: Pod "projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007458227s Aug 21 20:07:14.677: INFO: Pod "projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220536599s Aug 21 20:07:16.681: INFO: Pod "projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.224193197s STEP: Saw pod success Aug 21 20:07:16.681: INFO: Pod "projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1" satisfied condition "success or failure" Aug 21 20:07:16.684: INFO: Trying to get logs from node iruya-worker pod projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1 container projected-all-volume-test: STEP: delete the pod Aug 21 20:07:16.847: INFO: Waiting for pod projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1 to disappear Aug 21 20:07:16.862: INFO: Pod projected-volume-ca0f174c-21c4-448c-bfc4-7c887cf519a1 no longer exists [AfterEach] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:07:16.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5693" for this suite. Aug 21 20:07:22.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:07:22.983: INFO: namespace projected-5693 deletion completed in 6.116626457s • [SLOW TEST:12.582 seconds] [sig-storage] Projected combined /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:07:22.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:07:51.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-5895" for this suite. 
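Note: the namespaces test above asserts the cascading behavior of namespace deletion: every pod in the namespace is removed before the namespace object itself finishes finalizing. An equivalent manual check (names illustrative; kubectl wait assumes kubectl 1.11 or newer):

kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo run test-pod --image=nginx --restart=Never
kubectl -n nsdelete-demo wait --for=condition=Ready pod/test-pod
kubectl delete namespace nsdelete-demo
kubectl get namespace nsdelete-demo   # NotFound once finalization completes; the pod is gone with it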
Aug 21 20:07:57.270: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:07:57.340: INFO: namespace namespaces-5895 deletion completed in 6.082134847s STEP: Destroying namespace "nsdeletetest-7466" for this suite. Aug 21 20:07:57.342: INFO: Namespace nsdeletetest-7466 was already deleted STEP: Destroying namespace "nsdeletetest-9766" for this suite. Aug 21 20:08:03.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:08:03.466: INFO: namespace nsdeletetest-9766 deletion completed in 6.124005662s • [SLOW TEST:40.483 seconds] [sig-api-machinery] Namespaces [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:08:03.466: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Aug 21 20:08:07.594: INFO: Waiting up to 5m0s for pod "client-envvars-dfff20de-373a-460d-a229-c7e20e8c1b7b" in namespace "pods-3685" to be "success or failure" Aug 21 20:08:07.606: INFO: Pod "client-envvars-dfff20de-373a-460d-a229-c7e20e8c1b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.466429ms Aug 21 20:08:09.610: INFO: Pod "client-envvars-dfff20de-373a-460d-a229-c7e20e8c1b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015778254s Aug 21 20:08:11.613: INFO: Pod "client-envvars-dfff20de-373a-460d-a229-c7e20e8c1b7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.019289399s STEP: Saw pod success Aug 21 20:08:11.613: INFO: Pod "client-envvars-dfff20de-373a-460d-a229-c7e20e8c1b7b" satisfied condition "success or failure" Aug 21 20:08:11.616: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-dfff20de-373a-460d-a229-c7e20e8c1b7b container env3cont: STEP: delete the pod Aug 21 20:08:11.644: INFO: Waiting for pod client-envvars-dfff20de-373a-460d-a229-c7e20e8c1b7b to disappear Aug 21 20:08:11.647: INFO: Pod client-envvars-dfff20de-373a-460d-a229-c7e20e8c1b7b no longer exists [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:08:11.647: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-3685" for this suite. Aug 21 20:08:51.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:08:51.743: INFO: namespace pods-3685 deletion completed in 40.092042024s • [SLOW TEST:48.277 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:08:51.743: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 20:08:51.873: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4a190aa7-7df8-4229-bca8-b3bae15f2645" in namespace "downward-api-7642" to be "success or failure" Aug 21 20:08:51.882: INFO: Pod "downwardapi-volume-4a190aa7-7df8-4229-bca8-b3bae15f2645": Phase="Pending", Reason="", readiness=false. Elapsed: 8.692576ms Aug 21 20:08:53.886: INFO: Pod "downwardapi-volume-4a190aa7-7df8-4229-bca8-b3bae15f2645": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012634738s Aug 21 20:08:55.889: INFO: Pod "downwardapi-volume-4a190aa7-7df8-4229-bca8-b3bae15f2645": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015875225s STEP: Saw pod success Aug 21 20:08:55.889: INFO: Pod "downwardapi-volume-4a190aa7-7df8-4229-bca8-b3bae15f2645" satisfied condition "success or failure" Aug 21 20:08:55.891: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4a190aa7-7df8-4229-bca8-b3bae15f2645 container client-container: STEP: delete the pod Aug 21 20:08:55.985: INFO: Waiting for pod downwardapi-volume-4a190aa7-7df8-4229-bca8-b3bae15f2645 to disappear Aug 21 20:08:55.995: INFO: Pod downwardapi-volume-4a190aa7-7df8-4229-bca8-b3bae15f2645 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:08:55.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7642" for this suite. Aug 21 20:09:02.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:09:02.085: INFO: namespace downward-api-7642 deletion completed in 6.087448706s • [SLOW TEST:10.342 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:09:02.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Aug 21 20:09:02.200: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499" in namespace "projected-5673" to be "success or failure" Aug 21 20:09:02.267: INFO: Pod "downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499": Phase="Pending", Reason="", readiness=false. Elapsed: 67.460659ms Aug 21 20:09:04.524: INFO: Pod "downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499": Phase="Pending", Reason="", readiness=false. Elapsed: 2.324492065s Aug 21 20:09:06.535: INFO: Pod "downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.335133935s Aug 21 20:09:08.539: INFO: Pod "downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.33937757s STEP: Saw pod success Aug 21 20:09:08.539: INFO: Pod "downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499" satisfied condition "success or failure" Aug 21 20:09:08.544: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499 container client-container: STEP: delete the pod Aug 21 20:09:08.582: INFO: Waiting for pod downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499 to disappear Aug 21 20:09:08.636: INFO: Pod downwardapi-volume-e3c2782b-55a5-469e-9b7a-5cf093332499 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:09:08.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5673" for this suite. Aug 21 20:09:14.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:09:14.730: INFO: namespace projected-5673 deletion completed in 6.090594824s • [SLOW TEST:12.644 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:09:14.731: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Aug 21 20:09:14.784: INFO: Waiting up to 5m0s for pod "pod-982c935c-6804-46cf-83ec-9c3b9d35cbfd" in namespace "emptydir-5778" to be "success or failure" Aug 21 20:09:14.789: INFO: Pod "pod-982c935c-6804-46cf-83ec-9c3b9d35cbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040008ms Aug 21 20:09:16.799: INFO: Pod "pod-982c935c-6804-46cf-83ec-9c3b9d35cbfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014049761s Aug 21 20:09:18.802: INFO: Pod "pod-982c935c-6804-46cf-83ec-9c3b9d35cbfd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017348952s STEP: Saw pod success Aug 21 20:09:18.802: INFO: Pod "pod-982c935c-6804-46cf-83ec-9c3b9d35cbfd" satisfied condition "success or failure" Aug 21 20:09:18.804: INFO: Trying to get logs from node iruya-worker pod pod-982c935c-6804-46cf-83ec-9c3b9d35cbfd container test-container: STEP: delete the pod Aug 21 20:09:18.946: INFO: Waiting for pod pod-982c935c-6804-46cf-83ec-9c3b9d35cbfd to disappear Aug 21 20:09:18.962: INFO: Pod pod-982c935c-6804-46cf-83ec-9c3b9d35cbfd no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:09:18.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5778" for this suite. Aug 21 20:09:25.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:09:25.157: INFO: namespace emptydir-5778 deletion completed in 6.191619242s • [SLOW TEST:10.427 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:09:25.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Aug 21 20:09:25.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1521' Aug 21 20:09:25.504: INFO: stderr: "" Aug 21 20:09:25.504: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Aug 21 20:09:26.510: INFO: Selector matched 1 pods for map[app:redis] Aug 21 20:09:26.510: INFO: Found 0 / 1 Aug 21 20:09:27.509: INFO: Selector matched 1 pods for map[app:redis] Aug 21 20:09:27.509: INFO: Found 0 / 1 Aug 21 20:09:28.541: INFO: Selector matched 1 pods for map[app:redis] Aug 21 20:09:28.541: INFO: Found 1 / 1 Aug 21 20:09:28.541: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 Aug 21 20:09:28.544: INFO: Selector matched 1 pods for map[app:redis] Aug 21 20:09:28.544: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Aug 21 20:09:28.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4qcc redis-master --namespace=kubectl-1521' Aug 21 20:09:28.639: INFO: stderr: "" Aug 21 20:09:28.639: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Aug 20:09:28.160 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Aug 20:09:28.160 # Server started, Redis version 3.2.12\n1:M 21 Aug 20:09:28.160 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Aug 20:09:28.160 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Aug 21 20:09:28.639: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --tail=1' Aug 21 20:09:28.743: INFO: stderr: "" Aug 21 20:09:28.743: INFO: stdout: "1:M 21 Aug 20:09:28.160 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Aug 21 20:09:28.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --limit-bytes=1' Aug 21 20:09:28.853: INFO: stderr: "" Aug 21 20:09:28.853: INFO: stdout: " " STEP: exposing timestamps Aug 21 20:09:28.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --tail=1 --timestamps' Aug 21 20:09:28.953: INFO: stderr: "" Aug 21 20:09:28.953: INFO: stdout: "2020-08-21T20:09:28.160540051Z 1:M 21 Aug 20:09:28.160 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Aug 21 20:09:31.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --since=1s' Aug 21 20:09:31.553: INFO: stderr: "" Aug 21 20:09:31.553: INFO: stdout: "" Aug 21 20:09:31.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --since=24h' Aug 21 20:09:31.648: INFO: stderr: "" Aug 21 20:09:31.648: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 21 Aug 20:09:28.160 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 21 Aug 20:09:28.160 # Server started, Redis version 3.2.12\n1:M 21 Aug 20:09:28.160 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 21 Aug 20:09:28.160 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Aug 21 20:09:31.648: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1521' Aug 21 20:09:31.755: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Aug 21 20:09:31.755: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Aug 21 20:09:31.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-1521' Aug 21 20:09:31.852: INFO: stderr: "No resources found.\n" Aug 21 20:09:31.852: INFO: stdout: "" Aug 21 20:09:31.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-1521 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Aug 21 20:09:31.977: INFO: stderr: "" Aug 21 20:09:31.977: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:09:31.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1521" for this suite. 
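Note: the filtering steps in the kubectl-logs test above map one-to-one onto kubectl logs flags, all taken verbatim from the run:

kubectl logs redis-master-s4qcc redis-master --namespace=kubectl-1521               # full log
kubectl logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --tail=1      # last line only
kubectl logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --limit-bytes=1
kubectl logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --tail=1 --timestamps
kubectl logs redis-master-s4qcc redis-master --namespace=kubectl-1521 --since=1s

The --since=1s call returning empty stdout is the expected result: Redis finished logging seconds earlier, so the time-range filter excludes everything, while --since=24h returns the full log again.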
Aug 21 20:09:38.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:09:38.112: INFO: namespace kubectl-1521 deletion completed in 6.130761753s • [SLOW TEST:12.954 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:09:38.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Aug 21 20:09:38.302: INFO: Waiting up to 5m0s for pod "pod-a8733890-6331-4505-a749-6c6c3b7e35e2" in namespace "emptydir-4201" to be "success or failure" Aug 21 20:09:38.323: INFO: Pod "pod-a8733890-6331-4505-a749-6c6c3b7e35e2": Phase="Pending", Reason="", readiness=false. Elapsed: 20.293194ms Aug 21 20:09:40.380: INFO: Pod "pod-a8733890-6331-4505-a749-6c6c3b7e35e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077597735s Aug 21 20:09:42.384: INFO: Pod "pod-a8733890-6331-4505-a749-6c6c3b7e35e2": Phase="Running", Reason="", readiness=true. Elapsed: 4.081448964s Aug 21 20:09:44.387: INFO: Pod "pod-a8733890-6331-4505-a749-6c6c3b7e35e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08442187s STEP: Saw pod success Aug 21 20:09:44.387: INFO: Pod "pod-a8733890-6331-4505-a749-6c6c3b7e35e2" satisfied condition "success or failure" Aug 21 20:09:44.389: INFO: Trying to get logs from node iruya-worker pod pod-a8733890-6331-4505-a749-6c6c3b7e35e2 container test-container: STEP: delete the pod Aug 21 20:09:44.430: INFO: Waiting for pod pod-a8733890-6331-4505-a749-6c6c3b7e35e2 to disappear Aug 21 20:09:44.613: INFO: Pod pod-a8733890-6331-4505-a749-6c6c3b7e35e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:09:44.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4201" for this suite. 
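Note: "volume on tmpfs" in the emptyDir test above means an emptyDir with medium Memory; the test checks the mount's filesystem type and default mode from inside the container. Sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "mount | grep /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
EOF

The expected output contains a line along the lines of "tmpfs on /test-volume type tmpfs".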
Aug 21 20:09:50.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:09:50.744: INFO: namespace emptydir-4201 deletion completed in 6.127668953s • [SLOW TEST:12.632 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:09:50.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Aug 21 20:09:50.818: INFO: Waiting up to 5m0s for pod "var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c" in namespace "var-expansion-8552" to be "success or failure" Aug 21 20:09:50.845: INFO: Pod "var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.958685ms Aug 21 20:09:52.895: INFO: Pod "var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076926242s Aug 21 20:09:54.902: INFO: Pod "var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c": Phase="Running", Reason="", readiness=true. Elapsed: 4.08383245s Aug 21 20:09:56.905: INFO: Pod "var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087504734s STEP: Saw pod success Aug 21 20:09:56.905: INFO: Pod "var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c" satisfied condition "success or failure" Aug 21 20:09:56.908: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c container dapi-container: STEP: delete the pod Aug 21 20:09:56.950: INFO: Waiting for pod var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c to disappear Aug 21 20:09:56.970: INFO: Pod var-expansion-3109d855-32a6-4b52-ac62-9bc141184d6c no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:09:56.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8552" for this suite. 
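Note: the env-composition test above relies on the kubelet expanding $(VAR) references against variables declared earlier in the same env list. A sketch mirroring the dapi-container pod (values illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo \"$FOOBAR\""]
    env:
    - name: FOO
      value: foo-value
    - name: BAR
      value: bar-value
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"
EOF

Ordering matters: $(FOO) and $(BAR) expand only because they are declared before FOOBAR; a forward reference is left as the literal string.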
Aug 21 20:10:02.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Aug 21 20:10:03.094: INFO: namespace var-expansion-8552 deletion completed in 6.119741568s • [SLOW TEST:12.350 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Aug 21 20:10:03.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-dfc01e19-d0c8-49a9-b50e-5d72e732bdbf STEP: Creating a pod to test consume secrets Aug 21 20:10:03.204: INFO: Waiting up to 5m0s for pod "pod-secrets-0567b307-a222-4044-9d60-20413d37d120" in namespace "secrets-5689" to be "success or failure" Aug 21 20:10:03.208: INFO: Pod "pod-secrets-0567b307-a222-4044-9d60-20413d37d120": Phase="Pending", Reason="", readiness=false. Elapsed: 4.23332ms Aug 21 20:10:05.375: INFO: Pod "pod-secrets-0567b307-a222-4044-9d60-20413d37d120": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170695073s Aug 21 20:10:07.387: INFO: Pod "pod-secrets-0567b307-a222-4044-9d60-20413d37d120": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.183290732s STEP: Saw pod success Aug 21 20:10:07.387: INFO: Pod "pod-secrets-0567b307-a222-4044-9d60-20413d37d120" satisfied condition "success or failure" Aug 21 20:10:07.390: INFO: Trying to get logs from node iruya-worker pod pod-secrets-0567b307-a222-4044-9d60-20413d37d120 container secret-env-test: STEP: delete the pod Aug 21 20:10:07.512: INFO: Waiting for pod pod-secrets-0567b307-a222-4044-9d60-20413d37d120 to disappear Aug 21 20:10:07.626: INFO: Pod pod-secrets-0567b307-a222-4044-9d60-20413d37d120 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Aug 21 20:10:07.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5689" for this suite. 
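Note: the secrets-in-env-vars test above consumes a secret through valueFrom.secretKeyRef rather than a volume, then checks the value appears in the container's environment. Sketch with illustrative names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: demo-secret
          key: data-1
EOF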
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:10:15.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 21 20:10:16.742: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"1a4a6422-d48e-418c-a897-a00ec6656d20", Controller:(*bool)(0xc003fb4bfa), BlockOwnerDeletion:(*bool)(0xc003fb4bfb)}}
Aug 21 20:10:16.841: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7d8dc88c-3565-4702-b0f4-fa99f29c6236", Controller:(*bool)(0xc000b334f2), BlockOwnerDeletion:(*bool)(0xc000b334f3)}}
Aug 21 20:10:16.863: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"718d3603-93c0-4d3a-ab63-5550b81323d1", Controller:(*bool)(0xc003fb4daa), BlockOwnerDeletion:(*bool)(0xc003fb4dab)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:10:21.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6097" for this suite.
Aug 21 20:10:27.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:10:27.980: INFO: namespace gc-6097 deletion completed in 6.0870069s

• [SLOW TEST:12.243 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
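The OwnerReference dumps above show the circle the test builds: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, each as a controller reference with BlockOwnerDeletion set; the garbage collector must still collect all three. A sketch of constructing that kind of owner reference (the helper name and UID values are invented):

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    // ownedBy builds an owner reference of the shape printed in the log:
    // a controller reference to another Pod, with BlockOwnerDeletion set.
    func ownedBy(name string, uid types.UID) []metav1.OwnerReference {
        controller, block := true, true
        return []metav1.OwnerReference{{
            APIVersion:         "v1",
            Kind:               "Pod",
            Name:               name,
            UID:                uid,
            Controller:         &controller,
            BlockOwnerDeletion: &block,
        }}
    }

    func main() {
        // pod1 <- pod3, pod2 <- pod1, pod3 <- pod2: a dependency circle.
        fmt.Printf("pod1 owners: %+v\n", ownedBy("pod3", types.UID("uid-of-pod3")))
        fmt.Printf("pod2 owners: %+v\n", ownedBy("pod1", types.UID("uid-of-pod1")))
        fmt.Printf("pod3 owners: %+v\n", ownedBy("pod2", types.UID("uid-of-pod2")))
    }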
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:10:27.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7cf0e7f2-d5a8-4b74-a0c2-2c4bbc58c01f
STEP: Creating a pod to test consume secrets
Aug 21 20:10:28.062: INFO: Waiting up to 5m0s for pod "pod-secrets-70ce6812-af69-4416-b7d9-d06e902ab5b8" in namespace "secrets-6297" to be "success or failure"
Aug 21 20:10:28.071: INFO: Pod "pod-secrets-70ce6812-af69-4416-b7d9-d06e902ab5b8": Phase="Pending", Reason="", readiness=false. Elapsed: 9.321196ms
Aug 21 20:10:30.075: INFO: Pod "pod-secrets-70ce6812-af69-4416-b7d9-d06e902ab5b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013162095s
Aug 21 20:10:32.078: INFO: Pod "pod-secrets-70ce6812-af69-4416-b7d9-d06e902ab5b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01641335s
STEP: Saw pod success
Aug 21 20:10:32.078: INFO: Pod "pod-secrets-70ce6812-af69-4416-b7d9-d06e902ab5b8" satisfied condition "success or failure"
Aug 21 20:10:32.080: INFO: Trying to get logs from node iruya-worker pod pod-secrets-70ce6812-af69-4416-b7d9-d06e902ab5b8 container secret-volume-test: 
STEP: delete the pod
Aug 21 20:10:32.196: INFO: Waiting for pod pod-secrets-70ce6812-af69-4416-b7d9-d06e902ab5b8 to disappear
Aug 21 20:10:32.245: INFO: Pod pod-secrets-70ce6812-af69-4416-b7d9-d06e902ab5b8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:10:32.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6297" for this suite.
Aug 21 20:10:38.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:10:38.365: INFO: namespace secrets-6297 deletion completed in 6.116217733s

• [SLOW TEST:10.384 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
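Here the pod under test mounts the same secret through two separate volumes at two paths. A rough Go sketch of that shape (volume names and mount paths are hypothetical):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Two volumes point at the same secret; both mounts see its keys.
        secretName := "secret-test"
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    {Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
                    {Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: secretName}}},
                },
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "ls /etc/secret-volume-1 /etc/secret-volume-2"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume-1", MountPath: "/etc/secret-volume-1", ReadOnly: true},
                        {Name: "secret-volume-2", MountPath: "/etc/secret-volume-2", ReadOnly: true},
                    },
                }},
            },
        }
        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }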
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:10:38.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 21 20:10:38.450: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
alternatives.log
containers/
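The listing above is the kubelet's log directory as served through the apiserver's node proxy subresource (/api/v1/nodes/<node>/proxy/logs/). A sketch of issuing the same request with client-go (recent releases; older ones call Do() with no context argument):

    package main

    import (
        "context"
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Build a clientset from the same kubeconfig the suite uses, then
        // GET the node's proxy/logs/ subresource through the apiserver.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        body, err := cs.CoreV1().RESTClient().Get().
            Resource("nodes").
            Name("iruya-worker").
            SubResource("proxy").
            Suffix("logs/").
            Do(context.TODO()).
            Raw()
        if err != nil {
            panic(err)
        }
        fmt.Print(string(body)) // directory listing: alternatives.log, containers/, ...
    }

Routing through the apiserver means the client never talks to the kubelet directly; the apiserver authenticates, authorizes, and forwards the request.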

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:10:48.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8298" for this suite.
Aug 21 20:11:38.878: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:11:38.954: INFO: namespace kubelet-test-8298 deletion completed in 50.11013607s

• [SLOW TEST:54.265 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
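The Kubelet test above schedules a busybox pod that writes to stdout and asserts the text shows up in the container logs. A minimal sketch of such a pod (message and names invented); after it exits, `kubectl logs busybox-logs-demo` would print the echoed line:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The kubelet captures the container's stdout, so anything echoed
        // here becomes retrievable through the pod's log subresource.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "busybox-container",
                    Image:   "busybox",
                    Command: []string{"/bin/sh", "-c", "echo 'Hello World'"},
                }},
            },
        }
        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }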
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:11:38.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 21 20:11:39.032: INFO: Waiting up to 5m0s for pod "downwardapi-volume-086af476-71d6-4845-8571-c8801dbc5f71" in namespace "downward-api-3306" to be "success or failure"
Aug 21 20:11:39.036: INFO: Pod "downwardapi-volume-086af476-71d6-4845-8571-c8801dbc5f71": Phase="Pending", Reason="", readiness=false. Elapsed: 3.716425ms
Aug 21 20:11:41.040: INFO: Pod "downwardapi-volume-086af476-71d6-4845-8571-c8801dbc5f71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007634244s
Aug 21 20:11:43.043: INFO: Pod "downwardapi-volume-086af476-71d6-4845-8571-c8801dbc5f71": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010641019s
STEP: Saw pod success
Aug 21 20:11:43.043: INFO: Pod "downwardapi-volume-086af476-71d6-4845-8571-c8801dbc5f71" satisfied condition "success or failure"
Aug 21 20:11:43.045: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-086af476-71d6-4845-8571-c8801dbc5f71 container client-container: 
STEP: delete the pod
Aug 21 20:11:43.267: INFO: Waiting for pod downwardapi-volume-086af476-71d6-4845-8571-c8801dbc5f71 to disappear
Aug 21 20:11:43.604: INFO: Pod downwardapi-volume-086af476-71d6-4845-8571-c8801dbc5f71 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:11:43.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3306" for this suite.
Aug 21 20:11:49.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:11:49.919: INFO: namespace downward-api-3306 deletion completed in 6.312177124s

• [SLOW TEST:10.964 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
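The Downward API test above projects the container's own cpu limit into a file through a downwardAPI volume with a resourceFieldRef. Roughly, with invented names and a limit of 2 CPUs (reading /etc/podinfo/cpu_limit inside the container would then yield "2"):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // The container's limits.cpu is exposed as a file via resourceFieldRef.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("2")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        DownwardAPI: &corev1.DownwardAPIVolumeSource{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path: "cpu_limit",
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    ContainerName: "client-container",
                                    Resource:      "limits.cpu",
                                },
                            }},
                        },
                    },
                }},
            },
        }
        out, err := json.MarshalIndent(pod, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }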
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:11:49.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Aug 21 20:11:49.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Aug 21 20:11:50.051: INFO: stderr: ""
Aug 21 20:11:50.051: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:35471/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:11:50.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6454" for this suite.
Aug 21 20:11:56.093: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:11:56.171: INFO: namespace kubectl-6454 deletion completed in 6.116508281s

• [SLOW TEST:6.251 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
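The cluster-info test above shells out to kubectl and checks that the master and KubeDNS endpoints are printed. A rough programmatic counterpart with client-go (the kubeconfig path matches this run; everything else is illustrative): the apiserver address comes straight from the kubeconfig, and a /version round trip confirms it answers.

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // cfg.Host is the same URL `kubectl cluster-info` prints for the master.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        v, err := cs.Discovery().ServerVersion()
        if err != nil {
            panic(err)
        }
        fmt.Printf("Kubernetes master is running at %s (version %s)\n", cfg.Host, v.GitVersion)
    }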
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:11:56.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5414
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-5414
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5414
Aug 21 20:11:56.299: INFO: Found 0 stateful pods, waiting for 1
Aug 21 20:12:06.303: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Aug 21 20:12:06.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 21 20:12:06.549: INFO: stderr: "I0821 20:12:06.431887    2547 log.go:172] (0xc000132840) (0xc000318a00) Create stream\nI0821 20:12:06.431959    2547 log.go:172] (0xc000132840) (0xc000318a00) Stream added, broadcasting: 1\nI0821 20:12:06.437588    2547 log.go:172] (0xc000132840) Reply frame received for 1\nI0821 20:12:06.437634    2547 log.go:172] (0xc000132840) (0xc00079e000) Create stream\nI0821 20:12:06.437657    2547 log.go:172] (0xc000132840) (0xc00079e000) Stream added, broadcasting: 3\nI0821 20:12:06.438748    2547 log.go:172] (0xc000132840) Reply frame received for 3\nI0821 20:12:06.438800    2547 log.go:172] (0xc000132840) (0xc000868000) Create stream\nI0821 20:12:06.438817    2547 log.go:172] (0xc000132840) (0xc000868000) Stream added, broadcasting: 5\nI0821 20:12:06.439904    2547 log.go:172] (0xc000132840) Reply frame received for 5\nI0821 20:12:06.507996    2547 log.go:172] (0xc000132840) Data frame received for 5\nI0821 20:12:06.508025    2547 log.go:172] (0xc000868000) (5) Data frame handling\nI0821 20:12:06.508048    2547 log.go:172] (0xc000868000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 20:12:06.537971    2547 log.go:172] (0xc000132840) Data frame received for 3\nI0821 20:12:06.538009    2547 log.go:172] (0xc00079e000) (3) Data frame handling\nI0821 20:12:06.538035    2547 log.go:172] (0xc00079e000) (3) Data frame sent\nI0821 20:12:06.538278    2547 log.go:172] (0xc000132840) Data frame received for 5\nI0821 20:12:06.538304    2547 log.go:172] (0xc000868000) (5) Data frame handling\nI0821 20:12:06.538334    2547 log.go:172] (0xc000132840) Data frame received for 3\nI0821 20:12:06.538352    2547 log.go:172] (0xc00079e000) (3) Data frame handling\nI0821 20:12:06.540130    2547 log.go:172] (0xc000132840) Data frame received for 1\nI0821 20:12:06.540150    2547 log.go:172] (0xc000318a00) (1) Data frame handling\nI0821 20:12:06.540165    2547 log.go:172] (0xc000318a00) (1) Data frame sent\nI0821 20:12:06.540190    2547 log.go:172] (0xc000132840) (0xc000318a00) Stream removed, broadcasting: 1\nI0821 20:12:06.540216    2547 log.go:172] (0xc000132840) Go away received\nI0821 20:12:06.540514    2547 log.go:172] (0xc000132840) (0xc000318a00) Stream removed, broadcasting: 1\nI0821 20:12:06.540537    2547 log.go:172] (0xc000132840) (0xc00079e000) Stream removed, broadcasting: 3\nI0821 20:12:06.540550    2547 log.go:172] (0xc000132840) (0xc000868000) Stream removed, broadcasting: 5\n"
Aug 21 20:12:06.549: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 21 20:12:06.549: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 21 20:12:06.553: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Aug 21 20:12:16.558: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
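The mv above is the test's trick for making a stateful pod unhealthy without deleting it: nginx's readiness check depends on /index.html being servable, so hiding the file flips the pod to Ready=false while the container keeps running, and moving it back later restores readiness. A sketch of the kind of probe involved (the exact probe settings are the test's own and not shown in this log, so these values are guesses; in k8s.io/api releases from v1.22 on the embedded field is named ProbeHandler rather than Handler):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // Readiness tracks whether GET /index.html on port 80 succeeds;
        // a failing probe marks the pod unready but does not restart it.
        c := corev1.Container{
            Name:  "nginx",
            Image: "nginx",
            ReadinessProbe: &corev1.Probe{
                Handler: corev1.Handler{
                    HTTPGet: &corev1.HTTPGetAction{Path: "/index.html", Port: intstr.FromInt(80)},
                },
                PeriodSeconds:    1,
                SuccessThreshold: 1,
                FailureThreshold: 1,
            },
        }
        out, err := json.MarshalIndent(c, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out))
    }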
Aug 21 20:12:16.558: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 20:12:16.570: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:16.570: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:16.570: INFO: 
Aug 21 20:12:16.570: INFO: StatefulSet ss has not reached scale 3, at 1
Aug 21 20:12:17.574: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995330003s
Aug 21 20:12:18.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991234934s
Aug 21 20:12:19.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.985771896s
Aug 21 20:12:20.704: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.875439041s
Aug 21 20:12:21.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.860943781s
Aug 21 20:12:22.720: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.850230731s
Aug 21 20:12:23.912: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.84548214s
Aug 21 20:12:24.918: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.652917099s
Aug 21 20:12:25.922: INFO: Verifying statefulset ss doesn't scale past 3 for another 647.541449ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5414
Aug 21 20:12:26.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 21 20:12:27.134: INFO: stderr: "I0821 20:12:27.047877    2567 log.go:172] (0xc0008fc370) (0xc0004f4960) Create stream\nI0821 20:12:27.047932    2567 log.go:172] (0xc0008fc370) (0xc0004f4960) Stream added, broadcasting: 1\nI0821 20:12:27.052927    2567 log.go:172] (0xc0008fc370) Reply frame received for 1\nI0821 20:12:27.053001    2567 log.go:172] (0xc0008fc370) (0xc000784000) Create stream\nI0821 20:12:27.053017    2567 log.go:172] (0xc0008fc370) (0xc000784000) Stream added, broadcasting: 3\nI0821 20:12:27.055035    2567 log.go:172] (0xc0008fc370) Reply frame received for 3\nI0821 20:12:27.055091    2567 log.go:172] (0xc0008fc370) (0xc0009e6000) Create stream\nI0821 20:12:27.055121    2567 log.go:172] (0xc0008fc370) (0xc0009e6000) Stream added, broadcasting: 5\nI0821 20:12:27.055991    2567 log.go:172] (0xc0008fc370) Reply frame received for 5\nI0821 20:12:27.124408    2567 log.go:172] (0xc0008fc370) Data frame received for 5\nI0821 20:12:27.124457    2567 log.go:172] (0xc0008fc370) Data frame received for 3\nI0821 20:12:27.124498    2567 log.go:172] (0xc000784000) (3) Data frame handling\nI0821 20:12:27.124528    2567 log.go:172] (0xc000784000) (3) Data frame sent\nI0821 20:12:27.124545    2567 log.go:172] (0xc0008fc370) Data frame received for 3\nI0821 20:12:27.124553    2567 log.go:172] (0xc000784000) (3) Data frame handling\nI0821 20:12:27.124590    2567 log.go:172] (0xc0009e6000) (5) Data frame handling\nI0821 20:12:27.124638    2567 log.go:172] (0xc0009e6000) (5) Data frame sent\nI0821 20:12:27.124653    2567 log.go:172] (0xc0008fc370) Data frame received for 5\nI0821 20:12:27.124663    2567 log.go:172] (0xc0009e6000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0821 20:12:27.125987    2567 log.go:172] (0xc0008fc370) Data frame received for 1\nI0821 20:12:27.126014    2567 log.go:172] (0xc0004f4960) (1) Data frame handling\nI0821 20:12:27.126029    2567 log.go:172] (0xc0004f4960) (1) Data frame sent\nI0821 20:12:27.126044    2567 log.go:172] (0xc0008fc370) (0xc0004f4960) Stream removed, broadcasting: 1\nI0821 20:12:27.126064    2567 log.go:172] (0xc0008fc370) Go away received\nI0821 20:12:27.126423    2567 log.go:172] (0xc0008fc370) (0xc0004f4960) Stream removed, broadcasting: 1\nI0821 20:12:27.126443    2567 log.go:172] (0xc0008fc370) (0xc000784000) Stream removed, broadcasting: 3\nI0821 20:12:27.126452    2567 log.go:172] (0xc0008fc370) (0xc0009e6000) Stream removed, broadcasting: 5\n"
Aug 21 20:12:27.134: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 21 20:12:27.134: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 21 20:12:27.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 21 20:12:27.330: INFO: stderr: "I0821 20:12:27.255718    2587 log.go:172] (0xc000a54580) (0xc000500820) Create stream\nI0821 20:12:27.255789    2587 log.go:172] (0xc000a54580) (0xc000500820) Stream added, broadcasting: 1\nI0821 20:12:27.262136    2587 log.go:172] (0xc000a54580) Reply frame received for 1\nI0821 20:12:27.262200    2587 log.go:172] (0xc000a54580) (0xc0006ac140) Create stream\nI0821 20:12:27.262229    2587 log.go:172] (0xc000a54580) (0xc0006ac140) Stream added, broadcasting: 3\nI0821 20:12:27.263085    2587 log.go:172] (0xc000a54580) Reply frame received for 3\nI0821 20:12:27.263113    2587 log.go:172] (0xc000a54580) (0xc000500000) Create stream\nI0821 20:12:27.263121    2587 log.go:172] (0xc000a54580) (0xc000500000) Stream added, broadcasting: 5\nI0821 20:12:27.264206    2587 log.go:172] (0xc000a54580) Reply frame received for 5\nI0821 20:12:27.321930    2587 log.go:172] (0xc000a54580) Data frame received for 5\nI0821 20:12:27.321963    2587 log.go:172] (0xc000500000) (5) Data frame handling\nI0821 20:12:27.321975    2587 log.go:172] (0xc000500000) (5) Data frame sent\nI0821 20:12:27.321985    2587 log.go:172] (0xc000a54580) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 20:12:27.322000    2587 log.go:172] (0xc000500000) (5) Data frame handling\nI0821 20:12:27.322037    2587 log.go:172] (0xc000a54580) Data frame received for 3\nI0821 20:12:27.322055    2587 log.go:172] (0xc0006ac140) (3) Data frame handling\nI0821 20:12:27.322065    2587 log.go:172] (0xc0006ac140) (3) Data frame sent\nI0821 20:12:27.322080    2587 log.go:172] (0xc000a54580) Data frame received for 3\nI0821 20:12:27.322094    2587 log.go:172] (0xc0006ac140) (3) Data frame handling\nI0821 20:12:27.323450    2587 log.go:172] (0xc000a54580) Data frame received for 1\nI0821 20:12:27.323468    2587 log.go:172] (0xc000500820) (1) Data frame handling\nI0821 20:12:27.323484    2587 log.go:172] (0xc000500820) (1) Data frame sent\nI0821 20:12:27.323501    2587 log.go:172] (0xc000a54580) (0xc000500820) Stream removed, broadcasting: 1\nI0821 20:12:27.323542    2587 log.go:172] (0xc000a54580) Go away received\nI0821 20:12:27.323785    2587 log.go:172] (0xc000a54580) (0xc000500820) Stream removed, broadcasting: 1\nI0821 20:12:27.323809    2587 log.go:172] (0xc000a54580) (0xc0006ac140) Stream removed, broadcasting: 3\nI0821 20:12:27.323826    2587 log.go:172] (0xc000a54580) (0xc000500000) Stream removed, broadcasting: 5\n"
Aug 21 20:12:27.330: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 21 20:12:27.330: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 21 20:12:27.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 21 20:12:27.536: INFO: stderr: "I0821 20:12:27.444028    2607 log.go:172] (0xc0009dc370) (0xc000888820) Create stream\nI0821 20:12:27.444071    2607 log.go:172] (0xc0009dc370) (0xc000888820) Stream added, broadcasting: 1\nI0821 20:12:27.448099    2607 log.go:172] (0xc0009dc370) Reply frame received for 1\nI0821 20:12:27.448151    2607 log.go:172] (0xc0009dc370) (0xc000569b80) Create stream\nI0821 20:12:27.448165    2607 log.go:172] (0xc0009dc370) (0xc000569b80) Stream added, broadcasting: 3\nI0821 20:12:27.448927    2607 log.go:172] (0xc0009dc370) Reply frame received for 3\nI0821 20:12:27.448965    2607 log.go:172] (0xc0009dc370) (0xc0008880a0) Create stream\nI0821 20:12:27.448978    2607 log.go:172] (0xc0009dc370) (0xc0008880a0) Stream added, broadcasting: 5\nI0821 20:12:27.449682    2607 log.go:172] (0xc0009dc370) Reply frame received for 5\nI0821 20:12:27.526151    2607 log.go:172] (0xc0009dc370) Data frame received for 3\nI0821 20:12:27.526265    2607 log.go:172] (0xc000569b80) (3) Data frame handling\nI0821 20:12:27.526287    2607 log.go:172] (0xc000569b80) (3) Data frame sent\nI0821 20:12:27.526298    2607 log.go:172] (0xc0009dc370) Data frame received for 3\nI0821 20:12:27.526304    2607 log.go:172] (0xc000569b80) (3) Data frame handling\nI0821 20:12:27.526337    2607 log.go:172] (0xc0009dc370) Data frame received for 5\nI0821 20:12:27.526345    2607 log.go:172] (0xc0008880a0) (5) Data frame handling\nI0821 20:12:27.526350    2607 log.go:172] (0xc0008880a0) (5) Data frame sent\nI0821 20:12:27.526355    2607 log.go:172] (0xc0009dc370) Data frame received for 5\nI0821 20:12:27.526359    2607 log.go:172] (0xc0008880a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0821 20:12:27.527526    2607 log.go:172] (0xc0009dc370) Data frame received for 1\nI0821 20:12:27.527553    2607 log.go:172] (0xc000888820) (1) Data frame handling\nI0821 20:12:27.527563    2607 log.go:172] (0xc000888820) (1) Data frame sent\nI0821 20:12:27.527578    2607 log.go:172] (0xc0009dc370) (0xc000888820) Stream removed, broadcasting: 1\nI0821 20:12:27.527628    2607 log.go:172] (0xc0009dc370) Go away received\nI0821 20:12:27.527824    2607 log.go:172] (0xc0009dc370) (0xc000888820) Stream removed, broadcasting: 1\nI0821 20:12:27.527833    2607 log.go:172] (0xc0009dc370) (0xc000569b80) Stream removed, broadcasting: 3\nI0821 20:12:27.527838    2607 log.go:172] (0xc0009dc370) (0xc0008880a0) Stream removed, broadcasting: 5\n"
Aug 21 20:12:27.536: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Aug 21 20:12:27.536: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Aug 21 20:12:27.540: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
Aug 21 20:12:37.542: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 20:12:37.542: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Aug 21 20:12:37.542: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
Aug 21 20:12:37.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 21 20:12:37.716: INFO: stderr: "I0821 20:12:37.659891    2629 log.go:172] (0xc0008da420) (0xc0008f86e0) Create stream\nI0821 20:12:37.659949    2629 log.go:172] (0xc0008da420) (0xc0008f86e0) Stream added, broadcasting: 1\nI0821 20:12:37.661991    2629 log.go:172] (0xc0008da420) Reply frame received for 1\nI0821 20:12:37.662018    2629 log.go:172] (0xc0008da420) (0xc0009a4000) Create stream\nI0821 20:12:37.662025    2629 log.go:172] (0xc0008da420) (0xc0009a4000) Stream added, broadcasting: 3\nI0821 20:12:37.662643    2629 log.go:172] (0xc0008da420) Reply frame received for 3\nI0821 20:12:37.662670    2629 log.go:172] (0xc0008da420) (0xc000836000) Create stream\nI0821 20:12:37.662691    2629 log.go:172] (0xc0008da420) (0xc000836000) Stream added, broadcasting: 5\nI0821 20:12:37.663194    2629 log.go:172] (0xc0008da420) Reply frame received for 5\nI0821 20:12:37.710142    2629 log.go:172] (0xc0008da420) Data frame received for 5\nI0821 20:12:37.710168    2629 log.go:172] (0xc000836000) (5) Data frame handling\nI0821 20:12:37.710179    2629 log.go:172] (0xc000836000) (5) Data frame sent\nI0821 20:12:37.710186    2629 log.go:172] (0xc0008da420) Data frame received for 5\nI0821 20:12:37.710191    2629 log.go:172] (0xc000836000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 20:12:37.710212    2629 log.go:172] (0xc0008da420) Data frame received for 3\nI0821 20:12:37.710222    2629 log.go:172] (0xc0009a4000) (3) Data frame handling\nI0821 20:12:37.710228    2629 log.go:172] (0xc0009a4000) (3) Data frame sent\nI0821 20:12:37.710233    2629 log.go:172] (0xc0008da420) Data frame received for 3\nI0821 20:12:37.710237    2629 log.go:172] (0xc0009a4000) (3) Data frame handling\nI0821 20:12:37.711137    2629 log.go:172] (0xc0008da420) Data frame received for 1\nI0821 20:12:37.711147    2629 log.go:172] (0xc0008f86e0) (1) Data frame handling\nI0821 20:12:37.711158    2629 log.go:172] (0xc0008f86e0) (1) Data frame sent\nI0821 20:12:37.711174    2629 log.go:172] (0xc0008da420) (0xc0008f86e0) Stream removed, broadcasting: 1\nI0821 20:12:37.711205    2629 log.go:172] (0xc0008da420) Go away received\nI0821 20:12:37.711347    2629 log.go:172] (0xc0008da420) (0xc0008f86e0) Stream removed, broadcasting: 1\nI0821 20:12:37.711355    2629 log.go:172] (0xc0008da420) (0xc0009a4000) Stream removed, broadcasting: 3\nI0821 20:12:37.711360    2629 log.go:172] (0xc0008da420) (0xc000836000) Stream removed, broadcasting: 5\n"
Aug 21 20:12:37.716: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 21 20:12:37.716: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 21 20:12:37.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 21 20:12:38.263: INFO: stderr: "I0821 20:12:37.845597    2648 log.go:172] (0xc0007c2420) (0xc00055c780) Create stream\nI0821 20:12:37.845647    2648 log.go:172] (0xc0007c2420) (0xc00055c780) Stream added, broadcasting: 1\nI0821 20:12:37.847364    2648 log.go:172] (0xc0007c2420) Reply frame received for 1\nI0821 20:12:37.847418    2648 log.go:172] (0xc0007c2420) (0xc0006e0000) Create stream\nI0821 20:12:37.847436    2648 log.go:172] (0xc0007c2420) (0xc0006e0000) Stream added, broadcasting: 3\nI0821 20:12:37.848203    2648 log.go:172] (0xc0007c2420) Reply frame received for 3\nI0821 20:12:37.848238    2648 log.go:172] (0xc0007c2420) (0xc00055c820) Create stream\nI0821 20:12:37.848255    2648 log.go:172] (0xc0007c2420) (0xc00055c820) Stream added, broadcasting: 5\nI0821 20:12:37.849112    2648 log.go:172] (0xc0007c2420) Reply frame received for 5\nI0821 20:12:37.908906    2648 log.go:172] (0xc0007c2420) Data frame received for 5\nI0821 20:12:37.908937    2648 log.go:172] (0xc00055c820) (5) Data frame handling\nI0821 20:12:37.908955    2648 log.go:172] (0xc00055c820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 20:12:38.255653    2648 log.go:172] (0xc0007c2420) Data frame received for 5\nI0821 20:12:38.255671    2648 log.go:172] (0xc00055c820) (5) Data frame handling\nI0821 20:12:38.255703    2648 log.go:172] (0xc0007c2420) Data frame received for 3\nI0821 20:12:38.255718    2648 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0821 20:12:38.255731    2648 log.go:172] (0xc0006e0000) (3) Data frame sent\nI0821 20:12:38.255773    2648 log.go:172] (0xc0007c2420) Data frame received for 3\nI0821 20:12:38.255788    2648 log.go:172] (0xc0006e0000) (3) Data frame handling\nI0821 20:12:38.256812    2648 log.go:172] (0xc0007c2420) Data frame received for 1\nI0821 20:12:38.256827    2648 log.go:172] (0xc00055c780) (1) Data frame handling\nI0821 20:12:38.256842    2648 log.go:172] (0xc00055c780) (1) Data frame sent\nI0821 20:12:38.256857    2648 log.go:172] (0xc0007c2420) (0xc00055c780) Stream removed, broadcasting: 1\nI0821 20:12:38.256971    2648 log.go:172] (0xc0007c2420) Go away received\nI0821 20:12:38.257055    2648 log.go:172] (0xc0007c2420) (0xc00055c780) Stream removed, broadcasting: 1\nI0821 20:12:38.257068    2648 log.go:172] (0xc0007c2420) (0xc0006e0000) Stream removed, broadcasting: 3\nI0821 20:12:38.257076    2648 log.go:172] (0xc0007c2420) (0xc00055c820) Stream removed, broadcasting: 5\n"
Aug 21 20:12:38.263: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 21 20:12:38.263: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 21 20:12:38.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Aug 21 20:12:38.684: INFO: stderr: "I0821 20:12:38.484694    2667 log.go:172] (0xc00067e4d0) (0xc00094c8c0) Create stream\nI0821 20:12:38.484824    2667 log.go:172] (0xc00067e4d0) (0xc00094c8c0) Stream added, broadcasting: 1\nI0821 20:12:38.490184    2667 log.go:172] (0xc00067e4d0) Reply frame received for 1\nI0821 20:12:38.490229    2667 log.go:172] (0xc00067e4d0) (0xc00094c000) Create stream\nI0821 20:12:38.490244    2667 log.go:172] (0xc00067e4d0) (0xc00094c000) Stream added, broadcasting: 3\nI0821 20:12:38.491153    2667 log.go:172] (0xc00067e4d0) Reply frame received for 3\nI0821 20:12:38.491172    2667 log.go:172] (0xc00067e4d0) (0xc00055ef00) Create stream\nI0821 20:12:38.491178    2667 log.go:172] (0xc00067e4d0) (0xc00055ef00) Stream added, broadcasting: 5\nI0821 20:12:38.491842    2667 log.go:172] (0xc00067e4d0) Reply frame received for 5\nI0821 20:12:38.545403    2667 log.go:172] (0xc00067e4d0) Data frame received for 5\nI0821 20:12:38.545430    2667 log.go:172] (0xc00055ef00) (5) Data frame handling\nI0821 20:12:38.545448    2667 log.go:172] (0xc00055ef00) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0821 20:12:38.677988    2667 log.go:172] (0xc00067e4d0) Data frame received for 5\nI0821 20:12:38.678035    2667 log.go:172] (0xc00055ef00) (5) Data frame handling\nI0821 20:12:38.678060    2667 log.go:172] (0xc00067e4d0) Data frame received for 3\nI0821 20:12:38.678075    2667 log.go:172] (0xc00094c000) (3) Data frame handling\nI0821 20:12:38.678155    2667 log.go:172] (0xc00094c000) (3) Data frame sent\nI0821 20:12:38.678170    2667 log.go:172] (0xc00067e4d0) Data frame received for 3\nI0821 20:12:38.678186    2667 log.go:172] (0xc00094c000) (3) Data frame handling\nI0821 20:12:38.679577    2667 log.go:172] (0xc00067e4d0) Data frame received for 1\nI0821 20:12:38.679642    2667 log.go:172] (0xc00094c8c0) (1) Data frame handling\nI0821 20:12:38.679654    2667 log.go:172] (0xc00094c8c0) (1) Data frame sent\nI0821 20:12:38.679668    2667 log.go:172] (0xc00067e4d0) (0xc00094c8c0) Stream removed, broadcasting: 1\nI0821 20:12:38.679688    2667 log.go:172] (0xc00067e4d0) Go away received\nI0821 20:12:38.679942    2667 log.go:172] (0xc00067e4d0) (0xc00094c8c0) Stream removed, broadcasting: 1\nI0821 20:12:38.679968    2667 log.go:172] (0xc00067e4d0) (0xc00094c000) Stream removed, broadcasting: 3\nI0821 20:12:38.679976    2667 log.go:172] (0xc00067e4d0) (0xc00055ef00) Stream removed, broadcasting: 5\n"
Aug 21 20:12:38.684: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Aug 21 20:12:38.684: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Aug 21 20:12:38.684: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 20:12:38.756: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Aug 21 20:12:48.768: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 20:12:48.768: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 20:12:48.768: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Aug 21 20:12:48.780: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:48.780: INFO: ss-0  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:48.780: INFO: ss-1  iruya-worker   Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:48.780: INFO: ss-2  iruya-worker2  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:48.780: INFO: 
Aug 21 20:12:48.780: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 20:12:49.785: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:49.785: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:49.785: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:49.785: INFO: ss-2  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:49.785: INFO: 
Aug 21 20:12:49.785: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 20:12:50.788: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:50.788: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:50.788: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:50.788: INFO: ss-2  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:50.788: INFO: 
Aug 21 20:12:50.788: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 20:12:51.797: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:51.797: INFO: ss-0  iruya-worker2  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:51.797: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:51.797: INFO: ss-2  iruya-worker2  Pending  0s     [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:51.797: INFO: 
Aug 21 20:12:51.797: INFO: StatefulSet ss has not reached scale 0, at 3
Aug 21 20:12:52.801: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:52.801: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:52.801: INFO: ss-1  iruya-worker   Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:16 +0000 UTC  }]
Aug 21 20:12:52.801: INFO: 
Aug 21 20:12:52.801: INFO: StatefulSet ss has not reached scale 0, at 2
Aug 21 20:12:53.805: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:53.805: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:53.805: INFO: 
Aug 21 20:12:53.805: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 21 20:12:54.809: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:54.809: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:54.809: INFO: 
Aug 21 20:12:54.809: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 21 20:12:55.813: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:55.813: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:55.813: INFO: 
Aug 21 20:12:55.813: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 21 20:12:56.818: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:56.818: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:56.818: INFO: 
Aug 21 20:12:56.818: INFO: StatefulSet ss has not reached scale 0, at 1
Aug 21 20:12:57.821: INFO: POD   NODE           PHASE    GRACE  CONDITIONS
Aug 21 20:12:57.821: INFO: ss-0  iruya-worker2  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:12:38 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-08-21 20:11:56 +0000 UTC  }]
Aug 21 20:12:57.821: INFO: 
Aug 21 20:12:57.821: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5414
Aug 21 20:12:58.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 21 20:12:58.948: INFO: rc: 1
Aug 21 20:12:58.948: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    error: unable to upgrade connection: container not found ("nginx")
 []  0xc002337260 exit status 1   true [0xc000d5a5b0 0xc000d5a5c8 0xc000d5a5e0] [0xc000d5a5b0 0xc000d5a5c8 0xc000d5a5e0] [0xc000d5a5c0 0xc000d5a5d8] [0xba70e0 0xba70e0] 0xc003b53260 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("nginx")

error:
exit status 1
Aug 21 20:13:08.949: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 21 20:13:09.320: INFO: rc: 1
Aug 21 20:13:09.320: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc0028e2480 exit status 1   true [0xc0023af7b0 0xc0023af868 0xc0023af908] [0xc0023af7b0 0xc0023af868 0xc0023af908] [0xc0023af808 0xc0023af8d0] [0xba70e0 0xba70e0] 0xc003dbec00 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Aug 21 20:13:19 - 20:17:54: INFO: (28 further identical RunHostCmd retries elided: the same kubectl exec command was re-run roughly every 10s, each attempt returning rc: 1 with stderr 'Error from server (NotFound): pods "ss-0" not found')
Aug 21 20:18:04.550: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5414 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Aug 21 20:18:04.653: INFO: rc: 1
Aug 21 20:18:04.653: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: 
Aug 21 20:18:04.653: INFO: Scaling statefulset ss to 0
Aug 21 20:18:04.661: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Aug 21 20:18:04.663: INFO: Deleting all statefulset in ns statefulset-5414
Aug 21 20:18:04.665: INFO: Scaling statefulset ss to 0
Aug 21 20:18:04.673: INFO: Waiting for statefulset status.replicas updated to 0
Aug 21 20:18:04.675: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:18:04.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5414" for this suite.
Aug 21 20:18:12.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:18:12.856: INFO: namespace statefulset-5414 deletion completed in 8.096646449s

• [SLOW TEST:376.685 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
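
For reference, the RunHostCmd retry loop recorded above can be approximated by shelling out to kubectl and retrying on failure, as in the minimal Go sketch below. The namespace, pod name, command, and 10s interval are taken from the log; the helper name and the 5-minute timeout are illustrative assumptions, not the e2e framework's actual API.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // runHostCmdWithRetries is an illustrative stand-in for the framework's
    // retry behavior: run a shell command in a pod via kubectl exec and
    // retry every `interval` until it succeeds or `timeout` elapses.
    func runHostCmdWithRetries(ns, pod, cmd string, interval, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for {
    		out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
    			"exec", "--namespace="+ns, pod, "--", "/bin/sh", "-x", "-c", cmd).CombinedOutput()
    		if err == nil {
    			return string(out), nil
    		}
    		if time.Now().After(deadline) {
    			return string(out), fmt.Errorf("timed out retrying: last error: %v", err)
    		}
    		time.Sleep(interval) // the log shows a fixed 10s wait between attempts
    	}
    }

    func main() {
    	out, err := runHostCmdWithRetries("statefulset-5414", "ss-0",
    		"mv -v /tmp/index.html /usr/share/nginx/html/ || true",
    		10*time.Second, 5*time.Minute)
    	fmt.Println(out, err)
    }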
------------------------------
SSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:18:12.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5869
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 20:18:12.947: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 21 20:18:50.285: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.23:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5869 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 20:18:50.285: INFO: >>> kubeConfig: /root/.kube/config
I0821 20:18:50.323814       6 log.go:172] (0xc0010c44d0) (0xc002810820) Create stream
I0821 20:18:50.323848       6 log.go:172] (0xc0010c44d0) (0xc002810820) Stream added, broadcasting: 1
I0821 20:18:50.325718       6 log.go:172] (0xc0010c44d0) Reply frame received for 1
I0821 20:18:50.325751       6 log.go:172] (0xc0010c44d0) (0xc0022e1a40) Create stream
I0821 20:18:50.325763       6 log.go:172] (0xc0010c44d0) (0xc0022e1a40) Stream added, broadcasting: 3
I0821 20:18:50.326801       6 log.go:172] (0xc0010c44d0) Reply frame received for 3
I0821 20:18:50.326836       6 log.go:172] (0xc0010c44d0) (0xc0028108c0) Create stream
I0821 20:18:50.326850       6 log.go:172] (0xc0010c44d0) (0xc0028108c0) Stream added, broadcasting: 5
I0821 20:18:50.327825       6 log.go:172] (0xc0010c44d0) Reply frame received for 5
I0821 20:18:50.419006       6 log.go:172] (0xc0010c44d0) Data frame received for 5
I0821 20:18:50.419067       6 log.go:172] (0xc0028108c0) (5) Data frame handling
I0821 20:18:50.419105       6 log.go:172] (0xc0010c44d0) Data frame received for 3
I0821 20:18:50.419125       6 log.go:172] (0xc0022e1a40) (3) Data frame handling
I0821 20:18:50.419145       6 log.go:172] (0xc0022e1a40) (3) Data frame sent
I0821 20:18:50.419164       6 log.go:172] (0xc0010c44d0) Data frame received for 3
I0821 20:18:50.419176       6 log.go:172] (0xc0022e1a40) (3) Data frame handling
I0821 20:18:50.420134       6 log.go:172] (0xc0010c44d0) Data frame received for 1
I0821 20:18:50.420168       6 log.go:172] (0xc002810820) (1) Data frame handling
I0821 20:18:50.420178       6 log.go:172] (0xc002810820) (1) Data frame sent
I0821 20:18:50.420189       6 log.go:172] (0xc0010c44d0) (0xc002810820) Stream removed, broadcasting: 1
I0821 20:18:50.420206       6 log.go:172] (0xc0010c44d0) Go away received
I0821 20:18:50.420397       6 log.go:172] (0xc0010c44d0) (0xc002810820) Stream removed, broadcasting: 1
I0821 20:18:50.420424       6 log.go:172] (0xc0010c44d0) (0xc0022e1a40) Stream removed, broadcasting: 3
I0821 20:18:50.420438       6 log.go:172] (0xc0010c44d0) (0xc0028108c0) Stream removed, broadcasting: 5
Aug 21 20:18:50.420: INFO: Found all expected endpoints: [netserver-0]
Aug 21 20:18:50.514: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.87:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5869 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 20:18:50.514: INFO: >>> kubeConfig: /root/.kube/config
I0821 20:18:50.542931       6 log.go:172] (0xc0006b5ef0) (0xc002ba68c0) Create stream
I0821 20:18:50.542958       6 log.go:172] (0xc0006b5ef0) (0xc002ba68c0) Stream added, broadcasting: 1
I0821 20:18:50.544960       6 log.go:172] (0xc0006b5ef0) Reply frame received for 1
I0821 20:18:50.544995       6 log.go:172] (0xc0006b5ef0) (0xc00105c1e0) Create stream
I0821 20:18:50.545007       6 log.go:172] (0xc0006b5ef0) (0xc00105c1e0) Stream added, broadcasting: 3
I0821 20:18:50.545897       6 log.go:172] (0xc0006b5ef0) Reply frame received for 3
I0821 20:18:50.545950       6 log.go:172] (0xc0006b5ef0) (0xc000101040) Create stream
I0821 20:18:50.545964       6 log.go:172] (0xc0006b5ef0) (0xc000101040) Stream added, broadcasting: 5
I0821 20:18:50.546789       6 log.go:172] (0xc0006b5ef0) Reply frame received for 5
I0821 20:18:50.610729       6 log.go:172] (0xc0006b5ef0) Data frame received for 3
I0821 20:18:50.610759       6 log.go:172] (0xc00105c1e0) (3) Data frame handling
I0821 20:18:50.610774       6 log.go:172] (0xc00105c1e0) (3) Data frame sent
I0821 20:18:50.610784       6 log.go:172] (0xc0006b5ef0) Data frame received for 3
I0821 20:18:50.610788       6 log.go:172] (0xc00105c1e0) (3) Data frame handling
I0821 20:18:50.610876       6 log.go:172] (0xc0006b5ef0) Data frame received for 5
I0821 20:18:50.610893       6 log.go:172] (0xc000101040) (5) Data frame handling
I0821 20:18:50.613295       6 log.go:172] (0xc0006b5ef0) Data frame received for 1
I0821 20:18:50.613318       6 log.go:172] (0xc002ba68c0) (1) Data frame handling
I0821 20:18:50.613334       6 log.go:172] (0xc002ba68c0) (1) Data frame sent
I0821 20:18:50.613351       6 log.go:172] (0xc0006b5ef0) (0xc002ba68c0) Stream removed, broadcasting: 1
I0821 20:18:50.613368       6 log.go:172] (0xc0006b5ef0) Go away received
I0821 20:18:50.613498       6 log.go:172] (0xc0006b5ef0) (0xc002ba68c0) Stream removed, broadcasting: 1
I0821 20:18:50.613535       6 log.go:172] (0xc0006b5ef0) (0xc00105c1e0) Stream removed, broadcasting: 3
I0821 20:18:50.613561       6 log.go:172] (0xc0006b5ef0) (0xc000101040) Stream removed, broadcasting: 5
Aug 21 20:18:50.613: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:18:50.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5869" for this suite.
Aug 21 20:19:16.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:19:16.895: INFO: namespace pod-network-test-5869 deletion completed in 26.277226278s

• [SLOW TEST:64.038 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
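
For reference, the node-pod HTTP check driven through ExecWithOptions above boils down to fetching /hostName from each netserver pod IP and comparing the reply to the expected endpoint name. A minimal Go sketch follows; it assumes it runs somewhere with direct reachability to the pod IP (the test achieves this by exec-ing curl inside a host-network pod), and the function name is illustrative.

    package main

    import (
    	"fmt"
    	"io/ioutil"
    	"net/http"
    	"strings"
    	"time"
    )

    // checkHostName fetches the /hostName endpoint served on the pod IP and
    // verifies it matches the expected netserver pod name.
    func checkHostName(podIP string, port int, want string) error {
    	client := &http.Client{Timeout: 15 * time.Second} // mirrors curl --max-time 15
    	resp, err := client.Get(fmt.Sprintf("http://%s:%d/hostName", podIP, port))
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	body, err := ioutil.ReadAll(resp.Body)
    	if err != nil {
    		return err
    	}
    	if got := strings.TrimSpace(string(body)); got != want {
    		return fmt.Errorf("expected endpoint %q, got %q", want, got)
    	}
    	return nil
    }

    func main() {
    	// Pod IP and expected endpoint taken from the log above.
    	fmt.Println(checkHostName("10.244.1.23", 8080, "netserver-0"))
    }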
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:19:16.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Aug 21 20:19:17.075: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5728,SelfLink:/api/v1/namespaces/watch-5728/configmaps/e2e-watch-test-resource-version,UID:df0d2523-4c5f-45e8-9101-d465741b871f,ResourceVersion:1633290,Generation:0,CreationTimestamp:2020-08-21 20:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 20:19:17.075: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-5728,SelfLink:/api/v1/namespaces/watch-5728/configmaps/e2e-watch-test-resource-version,UID:df0d2523-4c5f-45e8-9101-d465741b871f,ResourceVersion:1633291,Generation:0,CreationTimestamp:2020-08-21 20:19:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:19:17.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5728" for this suite.
Aug 21 20:19:23.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:19:23.150: INFO: namespace watch-5728 deletion completed in 6.070775141s

• [SLOW TEST:6.255 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
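
For reference, a minimal client-go sketch of what this Watchers test exercises: open a watch on ConfigMaps at a known resourceVersion and observe the MODIFIED and DELETED events recorded above. It assumes the pre-context Watch signature used by client-go of the v1.15 era, and the starting resourceVersion shown is illustrative.

    package main

    import (
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	clientset := kubernetes.NewForConfigOrDie(config)

    	// Start watching from the resourceVersion returned by the first
    	// update, as the test does; "1633289" is an illustrative value.
    	w, err := clientset.CoreV1().ConfigMaps("watch-5728").Watch(metav1.ListOptions{
    		ResourceVersion: "1633289",
    	})
    	if err != nil {
    		panic(err)
    	}
    	defer w.Stop()

    	// Every change after that resourceVersion arrives as an event,
    	// matching the "Got : MODIFIED ..." / "Got : DELETED ..." lines.
    	for ev := range w.ResultChan() {
    		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
    	}
    }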
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:19:23.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Aug 21 20:19:23.226: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:19:32.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6141" for this suite.
Aug 21 20:19:38.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:19:38.692: INFO: namespace init-container-6141 deletion completed in 6.07662462s

• [SLOW TEST:15.540 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
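
For reference, the PodSpec shape behind "initContainers in spec.initContainers" on a RestartNever pod looks roughly like the minimal Go sketch below; the images, names, and commands are illustrative assumptions, not the test's exact spec.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-test"},
    		Spec: corev1.PodSpec{
    			// With RestartPolicy Never, each init container runs once,
    			// in order, before the app container starts.
    			RestartPolicy: corev1.RestartPolicyNever,
    			InitContainers: []corev1.Container{
    				{Name: "init1", Image: "busybox", Command: []string{"/bin/true"}},
    				{Name: "init2", Image: "busybox", Command: []string{"/bin/true"}},
    			},
    			Containers: []corev1.Container{
    				{Name: "run1", Image: "busybox", Command: []string{"/bin/true"}},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", pod)
    }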
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:19:38.692: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Aug 21 20:19:38.945: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Aug 21 20:19:38.991: INFO: Waiting for terminating namespaces to be deleted...
Aug 21 20:19:39.060: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Aug 21 20:19:39.066: INFO: kube-proxy-5zw8s from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 21 20:19:39.066: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 20:19:39.066: INFO: kindnet-nkf5n from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 21 20:19:39.066: INFO: 	Container kindnet-cni ready: true, restart count 0
Aug 21 20:19:39.066: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Aug 21 20:19:39.071: INFO: kube-proxy-b98qt from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 21 20:19:39.071: INFO: 	Container kube-proxy ready: true, restart count 0
Aug 21 20:19:39.071: INFO: kindnet-xsdzz from kube-system started at 2020-08-15 09:35:26 +0000 UTC (1 container statuses recorded)
Aug 21 20:19:39.071: INFO: 	Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.162d61fc4fce4edf], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:19:40.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3071" for this suite.
Aug 21 20:19:46.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:19:46.167: INFO: namespace sched-pred-3071 deletion completed in 6.076087733s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.475 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
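
For reference, the unschedulable pod this test creates amounts to a spec with a nonempty nodeSelector that no node satisfies, which is what produces the FailedScheduling event above. A minimal Go sketch, with an illustrative label key/value:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
    		Spec: corev1.PodSpec{
    			// No node carries this label, so the scheduler reports
    			// "0/3 nodes are available: 3 node(s) didn't match node selector."
    			NodeSelector: map[string]string{"nonexistent-label": "nonempty"},
    			Containers: []corev1.Container{
    				{Name: "restricted", Image: "k8s.gcr.io/pause:3.1"},
    			},
    		},
    	}
    	fmt.Printf("%+v\n", pod)
    }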
------------------------------
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:19:46.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 21 20:19:46.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-919'
Aug 21 20:19:46.347: INFO: stderr: ""
Aug 21 20:19:46.347: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Aug 21 20:19:46.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-919'
Aug 21 20:19:53.710: INFO: stderr: ""
Aug 21 20:19:53.710: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:19:53.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-919" for this suite.
Aug 21 20:19:59.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:19:59.914: INFO: namespace kubectl-919 deletion completed in 6.187058877s

• [SLOW TEST:13.747 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:19:59.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-j8km
STEP: Creating a pod to test atomic-volume-subpath
Aug 21 20:20:00.221: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-j8km" in namespace "subpath-7347" to be "success or failure"
Aug 21 20:20:00.272: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Pending", Reason="", readiness=false. Elapsed: 51.161284ms
Aug 21 20:20:02.275: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054191887s
Aug 21 20:20:04.279: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 4.058385581s
Aug 21 20:20:06.282: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 6.061175085s
Aug 21 20:20:08.285: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 8.064661181s
Aug 21 20:20:10.289: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 10.06803593s
Aug 21 20:20:12.293: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 12.072121303s
Aug 21 20:20:14.297: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 14.076652112s
Aug 21 20:20:16.301: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 16.079755132s
Aug 21 20:20:18.324: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 18.10336678s
Aug 21 20:20:20.327: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 20.106500851s
Aug 21 20:20:22.331: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 22.109898167s
Aug 21 20:20:24.335: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Running", Reason="", readiness=true. Elapsed: 24.114336639s
Aug 21 20:20:26.339: INFO: Pod "pod-subpath-test-configmap-j8km": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.117708026s
STEP: Saw pod success
Aug 21 20:20:26.339: INFO: Pod "pod-subpath-test-configmap-j8km" satisfied condition "success or failure"
Aug 21 20:20:26.341: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-j8km container test-container-subpath-configmap-j8km: 
STEP: delete the pod
Aug 21 20:20:26.403: INFO: Waiting for pod pod-subpath-test-configmap-j8km to disappear
Aug 21 20:20:26.419: INFO: Pod pod-subpath-test-configmap-j8km no longer exists
STEP: Deleting pod pod-subpath-test-configmap-j8km
Aug 21 20:20:26.419: INFO: Deleting pod "pod-subpath-test-configmap-j8km" in namespace "subpath-7347"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:20:26.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7347" for this suite.
Aug 21 20:20:32.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:20:32.509: INFO: namespace subpath-7347 deletion completed in 6.083899218s

• [SLOW TEST:32.595 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
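
For reference, the atomic-writer subPath arrangement this test exercises mounts a single configmap key over an existing file via a volumeMount subPath. A minimal Go sketch of the spec shape; the volume, key, and path names are illustrative assumptions, not the test's exact values:

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    func main() {
    	spec := corev1.PodSpec{
    		Volumes: []corev1.Volume{{
    			Name: "config",
    			VolumeSource: corev1.VolumeSource{
    				ConfigMap: &corev1.ConfigMapVolumeSource{
    					LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
    				},
    			},
    		}},
    		Containers: []corev1.Container{{
    			Name:  "test-container-subpath",
    			Image: "busybox",
    			VolumeMounts: []corev1.VolumeMount{{
    				Name:      "config",
    				MountPath: "/etc/existing-file", // mount over an existing file
    				SubPath:   "configmap-key",      // project just one key there
    			}},
    		}},
    	}
    	fmt.Printf("%+v\n", spec)
    }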
------------------------------
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:20:32.510: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 20:20:32.744: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:20:32.773: INFO: Number of nodes with available pods: 0
Aug 21 20:20:32.773: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:20:33.778: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:20:33.781: INFO: Number of nodes with available pods: 0
Aug 21 20:20:33.781: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:20:34.779: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:20:34.782: INFO: Number of nodes with available pods: 0
Aug 21 20:20:34.782: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:20:35.977: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:20:36.247: INFO: Number of nodes with available pods: 0
Aug 21 20:20:36.247: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:20:36.888: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:20:36.892: INFO: Number of nodes with available pods: 0
Aug 21 20:20:36.892: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:20:37.864: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:20:37.895: INFO: Number of nodes with available pods: 2
Aug 21 20:20:37.895: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Aug 21 20:20:37.958: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:20:37.961: INFO: Number of nodes with available pods: 1
Aug 21 20:20:37.961: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 21 20:20:38 - 20:20:56: INFO: (19 further identical DaemonSet status polls elided: the same three-line check repeated roughly every second, each reporting "Number of nodes with available pods: 1" while the replacement daemon pod came up)
Aug 21 20:20:57.972: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:20:57.975: INFO: Number of nodes with available pods: 2
Aug 21 20:20:57.975: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7764, will wait for the garbage collector to delete the pods
Aug 21 20:20:58.039: INFO: Deleting DaemonSet.extensions daemon-set took: 9.452488ms
Aug 21 20:20:58.339: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.281153ms
Aug 21 20:21:13.743: INFO: Number of nodes with available pods: 0
Aug 21 20:21:13.743: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 20:21:13.747: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7764/daemonsets","resourceVersion":"1633679"},"items":null}

Aug 21 20:21:13.749: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7764/pods","resourceVersion":"1633679"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:21:13.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7764" for this suite.
Aug 21 20:21:19.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:21:19.857: INFO: namespace daemonsets-7764 deletion completed in 6.089590556s

• [SLOW TEST:47.347 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
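
Editor's sketch for readers reproducing the DaemonSet behavior by hand: the "can't tolerate" lines above come from the e2e checker skipping tainted nodes, because a DaemonSet whose pod template carries no matching toleration never schedules onto iruya-control-plane. A minimal stand-in (the DaemonSet name matches the test's; the labels and image are illustrative, not the test's actual spec):

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        # No toleration for node-role.kubernetes.io/master:NoSchedule here,
        # so the controller never places a pod on the tainted control-plane
        # node and the checker logs "skip checking this node" for it.
        containers:
        - name: app
          image: nginx   # stand-in image; the real test uses an e2e serve-hostname image
  EOF

Adding a toleration with key node-role.kubernetes.io/master and effect NoSchedule to the pod spec would make the pods land on all three nodes instead of two.
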
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:21:19.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-a3be5238-dd6e-4fb6-9026-06f6cac729d6
STEP: Creating secret with name s-test-opt-upd-9fa8f4f5-2b02-44cd-bb20-1526ac8cb9d4
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a3be5238-dd6e-4fb6-9026-06f6cac729d6
STEP: Updating secret s-test-opt-upd-9fa8f4f5-2b02-44cd-bb20-1526ac8cb9d4
STEP: Creating secret with name s-test-opt-create-f60b29ea-73f1-4e48-926e-5ca33680428d
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:23:01.120: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4378" for this suite.
Aug 21 20:23:23.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:23:23.232: INFO: namespace secrets-4378 deletion completed in 22.109385682s

• [SLOW TEST:123.375 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
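
The "optional updates" behavior above hinges on secret volumes marked optional: true, which let the pod start even if the referenced secret is missing and have kubelet propagate later creates, updates, and deletes into the mounted files. A rough shell equivalent (secret and pod names here are hypothetical; the test's generated names appear in the STEP lines):

  kubectl create secret generic s-test-opt --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-watcher
  spec:
    containers:
    - name: watcher
      image: busybox
      command: ["sh", "-c", "while true; do cat /etc/secret-volume/* 2>/dev/null; sleep 5; done"]
      volumeMounts:
      - name: sec
        mountPath: /etc/secret-volume
    volumes:
    - name: sec
      secret:
        secretName: s-test-opt
        optional: true   # pod runs even while the secret is absent
  EOF
  # Deleting and re-creating the secret is eventually reflected under
  # /etc/secret-volume. Kubelet picks up the change on its periodic sync,
  # which is why the "waiting to observe update in volume" step above can
  # take on the order of a minute or two.
  kubectl delete secret s-test-opt
  kubectl create secret generic s-test-opt --from-literal=data-1=value-2
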
SSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:23:23.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4322.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-4322.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4322.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-4322.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4322.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4322.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4322.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-4322.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4322.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-4322.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4322.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 92.192.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.192.92_udp@PTR;check="$$(dig +tcp +noall +answer +search 92.192.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.192.92_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-4322.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-4322.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-4322.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-4322.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-4322.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-4322.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-4322.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-4322.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-4322.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-4322.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4322.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 92.192.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.192.92_udp@PTR;check="$$(dig +tcp +noall +answer +search 92.192.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.192.92_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Aug 21 20:23:29.382: INFO: Unable to read wheezy_udp@dns-test-service.dns-4322.svc.cluster.local from pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd: the server could not find the requested resource (get pods dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd)
Aug 21 20:23:29.384: INFO: Unable to read wheezy_tcp@dns-test-service.dns-4322.svc.cluster.local from pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd: the server could not find the requested resource (get pods dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd)
Aug 21 20:23:29.387: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local from pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd: the server could not find the requested resource (get pods dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd)
Aug 21 20:23:29.390: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local from pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd: the server could not find the requested resource (get pods dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd)
Aug 21 20:23:29.410: INFO: Unable to read jessie_udp@dns-test-service.dns-4322.svc.cluster.local from pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd: the server could not find the requested resource (get pods dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd)
Aug 21 20:23:29.413: INFO: Unable to read jessie_tcp@dns-test-service.dns-4322.svc.cluster.local from pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd: the server could not find the requested resource (get pods dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd)
Aug 21 20:23:29.415: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local from pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd: the server could not find the requested resource (get pods dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd)
Aug 21 20:23:29.418: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local from pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd: the server could not find the requested resource (get pods dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd)
Aug 21 20:23:29.435: INFO: Lookups using dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd failed for: [wheezy_udp@dns-test-service.dns-4322.svc.cluster.local wheezy_tcp@dns-test-service.dns-4322.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local jessie_udp@dns-test-service.dns-4322.svc.cluster.local jessie_tcp@dns-test-service.dns-4322.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-4322.svc.cluster.local]

[... five further probe rounds (20:23:34, 20:23:39, 20:23:44, 20:23:49, 20:23:54) logged the same eight failed lookups against pod dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd ...]

Aug 21 20:23:59.513: INFO: DNS probes using dns-4322/dns-test-1acebbfe-aa6b-4e31-8a5b-5515d1cc23dd succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:24:00.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4322" for this suite.
Aug 21 20:24:06.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:24:06.278: INFO: namespace dns-4322 deletion completed in 6.092933261s

• [SLOW TEST:43.046 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
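
The failed rounds between 20:23:29 and 20:23:54 are the expected warm-up: the probe pods keep retrying until the headless service's records are actually served (typically once its endpoints are ready), and everything succeeds by 20:23:59. To reproduce one of the same probes by hand (FQDN taken from the log; note the test namespace is deleted afterwards, so substitute a live service when trying this):

  kubectl run -n dns-4322 --rm -it dns-check --image=busybox:1.28 --restart=Never -- \
    nslookup dns-test-service.dns-4322.svc.cluster.local
  # For the SRV and PTR probes the test runs, use any image that ships dig
  # and mirror the flags from the STEP lines above, e.g.:
  #   dig +notcp +noall +answer _http._tcp.dns-test-service.dns-4322.svc.cluster.local SRV
  #   dig +tcp   +noall +answer _http._tcp.dns-test-service.dns-4322.svc.cluster.local SRV
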
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:24:06.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Aug 21 20:24:06.367: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-watch-closed,UID:52584d6f-4ac4-411e-b592-8c9ab3625a59,ResourceVersion:1634143,Generation:0,CreationTimestamp:2020-08-21 20:24:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 20:24:06.367: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-watch-closed,UID:52584d6f-4ac4-411e-b592-8c9ab3625a59,ResourceVersion:1634144,Generation:0,CreationTimestamp:2020-08-21 20:24:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Aug 21 20:24:06.415: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-watch-closed,UID:52584d6f-4ac4-411e-b592-8c9ab3625a59,ResourceVersion:1634145,Generation:0,CreationTimestamp:2020-08-21 20:24:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 20:24:06.415: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-3084,SelfLink:/api/v1/namespaces/watch-3084/configmaps/e2e-watch-test-watch-closed,UID:52584d6f-4ac4-411e-b592-8c9ab3625a59,ResourceVersion:1634146,Generation:0,CreationTimestamp:2020-08-21 20:24:06 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:24:06.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3084" for this suite.
Aug 21 20:24:12.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:24:12.588: INFO: namespace watch-3084 deletion completed in 6.168008175s

• [SLOW TEST:6.309 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
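
The restart semantics above map directly onto the watch API's resourceVersion parameter: a new watch started from the last version the closed watch observed (1634144 here) replays exactly the events that happened in between, the MODIFIED at 1634145 and the DELETED at 1634146. The same flow through the raw API (namespace, label selector, and versions taken from the log; assumes kubectl proxy for auth):

  kubectl proxy --port=8001 &
  # First watch; note each event object's metadata.resourceVersion:
  curl -N "http://127.0.0.1:8001/api/v1/namespaces/watch-3084/configmaps?watch=1&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted"
  # ...close it, mutate the configmap, then resume from the last version seen:
  curl -N "http://127.0.0.1:8001/api/v1/namespaces/watch-3084/configmaps?watch=1&resourceVersion=1634144&labelSelector=watch-this-configmap%3Dwatch-closed-and-restarted"
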
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:24:12.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Aug 21 20:24:12.707: INFO: Waiting up to 5m0s for pod "downward-api-be254406-9aee-42bf-a005-d215ac7282b5" in namespace "downward-api-3429" to be "success or failure"
Aug 21 20:24:12.717: INFO: Pod "downward-api-be254406-9aee-42bf-a005-d215ac7282b5": Phase="Pending", Reason="", readiness=false. Elapsed: 10.565424ms
Aug 21 20:24:14.721: INFO: Pod "downward-api-be254406-9aee-42bf-a005-d215ac7282b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014451959s
Aug 21 20:24:17.047: INFO: Pod "downward-api-be254406-9aee-42bf-a005-d215ac7282b5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.340745904s
Aug 21 20:24:19.093: INFO: Pod "downward-api-be254406-9aee-42bf-a005-d215ac7282b5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.386337345s
Aug 21 20:24:21.097: INFO: Pod "downward-api-be254406-9aee-42bf-a005-d215ac7282b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.390378124s
STEP: Saw pod success
Aug 21 20:24:21.097: INFO: Pod "downward-api-be254406-9aee-42bf-a005-d215ac7282b5" satisfied condition "success or failure"
Aug 21 20:24:21.099: INFO: Trying to get logs from node iruya-worker2 pod downward-api-be254406-9aee-42bf-a005-d215ac7282b5 container dapi-container: 
STEP: delete the pod
Aug 21 20:24:21.323: INFO: Waiting for pod downward-api-be254406-9aee-42bf-a005-d215ac7282b5 to disappear
Aug 21 20:24:21.354: INFO: Pod downward-api-be254406-9aee-42bf-a005-d215ac7282b5 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:24:21.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3429" for this suite.
Aug 21 20:24:27.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:24:27.546: INFO: namespace downward-api-3429 deletion completed in 6.188388838s

• [SLOW TEST:14.958 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
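
For reference, the downward-API wiring this test exercises is a pod UID exposed as an environment variable via a fieldRef. A minimal sketch (pod name and command are illustrative; the container name matches the log's dapi-container):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-api-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      command: ["sh", "-c", "echo POD_UID=$POD_UID"]
      env:
      - name: POD_UID
        valueFrom:
          fieldRef:
            fieldPath: metadata.uid   # the pod's own UID, injected at start
  EOF
  kubectl logs downward-api-demo   # prints POD_UID=<uid>
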
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:24:27.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:24:33.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9794" for this suite.
Aug 21 20:25:27.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:25:28.123: INFO: namespace kubelet-test-9794 deletion completed in 54.124231007s

• [SLOW TEST:60.577 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
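
The hostAliases test above boils down to: entries in spec.hostAliases are written by kubelet into the container's /etc/hosts. A hand-rolled sketch (IP and hostnames illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: hostaliases-demo
  spec:
    restartPolicy: Never
    hostAliases:
    - ip: "123.45.67.89"
      hostnames: ["foo.local", "bar.local"]
    containers:
    - name: busybox
      image: busybox
      command: ["cat", "/etc/hosts"]
  EOF
  kubectl logs hostaliases-demo   # /etc/hosts gains a "123.45.67.89  foo.local  bar.local" entry
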
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:25:28.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Aug 21 20:25:28.558: INFO: Waiting up to 5m0s for pod "pod-d80b0af4-9df5-4664-b6b0-c5257fa5202e" in namespace "emptydir-3130" to be "success or failure"
Aug 21 20:25:28.624: INFO: Pod "pod-d80b0af4-9df5-4664-b6b0-c5257fa5202e": Phase="Pending", Reason="", readiness=false. Elapsed: 66.142182ms
Aug 21 20:25:30.627: INFO: Pod "pod-d80b0af4-9df5-4664-b6b0-c5257fa5202e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069188234s
Aug 21 20:25:32.631: INFO: Pod "pod-d80b0af4-9df5-4664-b6b0-c5257fa5202e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073778952s
STEP: Saw pod success
Aug 21 20:25:32.631: INFO: Pod "pod-d80b0af4-9df5-4664-b6b0-c5257fa5202e" satisfied condition "success or failure"
Aug 21 20:25:32.633: INFO: Trying to get logs from node iruya-worker2 pod pod-d80b0af4-9df5-4664-b6b0-c5257fa5202e container test-container: 
STEP: delete the pod
Aug 21 20:25:32.649: INFO: Waiting for pod pod-d80b0af4-9df5-4664-b6b0-c5257fa5202e to disappear
Aug 21 20:25:32.672: INFO: Pod pod-d80b0af4-9df5-4664-b6b0-c5257fa5202e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:25:32.672: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3130" for this suite.
Aug 21 20:25:38.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:25:38.734: INFO: namespace emptydir-3130 deletion completed in 6.06037656s

• [SLOW TEST:10.610 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
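
Decoding the test name: "non-root" means the pod runs as a non-root UID, "0666" is the file mode written and verified inside the volume, and "default" is an emptyDir on the node's default medium (disk) rather than medium: Memory, the tmpfs variant seen in other specs. A rough manual equivalent (names and UID are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-mode-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001   # the "non-root" part
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo hi > /mnt/v/f && chmod 0666 /mnt/v/f && ls -l /mnt/v/f"]
      volumeMounts:
      - name: v
        mountPath: /mnt/v
    volumes:
    - name: v
      emptyDir: {}   # "default" medium; medium: Memory would be the tmpfs variants
  EOF
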
SS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:25:38.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9198
I0821 20:25:38.804926       6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9198, replica count: 1
I0821 20:25:39.855371       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 20:25:40.855596       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 20:25:41.855782       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 20:25:42.856002       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 20:25:43.856203       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0821 20:25:44.856405       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
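Each Created/Got endpoints pair below times how long it takes, after creating a Service that selects the RC's pod, for the matching Endpoints object to be populated. A rough manual equivalent (the timing harness is an assumption, not the test's actual client code; names taken from the log):
  start=$(date +%s%N)
  kubectl expose rc svc-latency-rc -n svc-latency-9198 --name=latency-svc-demo --port=80
  # Poll until the service's endpoints carry at least one address:
  until [ -n "$(kubectl get endpoints latency-svc-demo -n svc-latency-9198 \
        -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do sleep 0.05; done
  echo "endpoints ready after $(( ( $(date +%s%N) - start ) / 1000000 )) ms"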
Aug 21 20:25:45.001: INFO: Created: latency-svc-58927
Aug 21 20:25:45.014: INFO: Got endpoints: latency-svc-58927 [58.001661ms]
Aug 21 20:25:45.045: INFO: Created: latency-svc-khwlb
Aug 21 20:25:45.114: INFO: Got endpoints: latency-svc-khwlb [99.706702ms]
Aug 21 20:25:45.138: INFO: Created: latency-svc-5ngxj
Aug 21 20:25:45.164: INFO: Got endpoints: latency-svc-5ngxj [150.232469ms]
Aug 21 20:25:45.336: INFO: Created: latency-svc-m5d6g
Aug 21 20:25:45.357: INFO: Got endpoints: latency-svc-m5d6g [342.338575ms]
Aug 21 20:25:45.390: INFO: Created: latency-svc-77scr
Aug 21 20:25:45.479: INFO: Got endpoints: latency-svc-77scr [464.408789ms]
Aug 21 20:25:45.511: INFO: Created: latency-svc-ngxvw
Aug 21 20:25:45.523: INFO: Got endpoints: latency-svc-ngxvw [509.008687ms]
Aug 21 20:25:45.557: INFO: Created: latency-svc-kkcf2
Aug 21 20:25:45.748: INFO: Got endpoints: latency-svc-kkcf2 [733.750113ms]
Aug 21 20:25:45.751: INFO: Created: latency-svc-b874f
Aug 21 20:25:45.813: INFO: Got endpoints: latency-svc-b874f [798.624773ms]
Aug 21 20:25:46.869: INFO: Created: latency-svc-gg2jv
Aug 21 20:25:46.914: INFO: Got endpoints: latency-svc-gg2jv [1.899887933s]
Aug 21 20:25:47.283: INFO: Created: latency-svc-zn6k9
Aug 21 20:25:47.288: INFO: Got endpoints: latency-svc-zn6k9 [2.273257303s]
Aug 21 20:25:47.438: INFO: Created: latency-svc-n5z8b
Aug 21 20:25:47.519: INFO: Got endpoints: latency-svc-n5z8b [2.504809718s]
Aug 21 20:25:47.683: INFO: Created: latency-svc-drh4q
Aug 21 20:25:47.688: INFO: Got endpoints: latency-svc-drh4q [2.673123651s]
Aug 21 20:25:47.832: INFO: Created: latency-svc-ht7dc
Aug 21 20:25:47.873: INFO: Got endpoints: latency-svc-ht7dc [2.858163296s]
Aug 21 20:25:47.874: INFO: Created: latency-svc-pk765
Aug 21 20:25:47.886: INFO: Got endpoints: latency-svc-pk765 [2.871944865s]
Aug 21 20:25:47.977: INFO: Created: latency-svc-jn2mj
Aug 21 20:25:48.002: INFO: Got endpoints: latency-svc-jn2mj [2.987567686s]
Aug 21 20:25:48.002: INFO: Created: latency-svc-5tmdf
Aug 21 20:25:48.011: INFO: Got endpoints: latency-svc-5tmdf [2.997236968s]
Aug 21 20:25:48.042: INFO: Created: latency-svc-z4qpc
Aug 21 20:25:48.048: INFO: Got endpoints: latency-svc-z4qpc [2.933832815s]
Aug 21 20:25:48.068: INFO: Created: latency-svc-rbb69
Aug 21 20:25:48.138: INFO: Got endpoints: latency-svc-rbb69 [2.97314939s]
Aug 21 20:25:48.141: INFO: Created: latency-svc-c84qj
Aug 21 20:25:48.150: INFO: Got endpoints: latency-svc-c84qj [2.79364063s]
Aug 21 20:25:48.191: INFO: Created: latency-svc-kjcwn
Aug 21 20:25:48.212: INFO: Got endpoints: latency-svc-kjcwn [2.73300482s]
Aug 21 20:25:48.236: INFO: Created: latency-svc-bfvcv
Aug 21 20:25:48.329: INFO: Got endpoints: latency-svc-bfvcv [2.805624577s]
Aug 21 20:25:48.341: INFO: Created: latency-svc-z4zz7
Aug 21 20:25:48.368: INFO: Got endpoints: latency-svc-z4zz7 [2.619670766s]
Aug 21 20:25:48.395: INFO: Created: latency-svc-krt9s
Aug 21 20:25:48.416: INFO: Got endpoints: latency-svc-krt9s [2.603171426s]
Aug 21 20:25:48.503: INFO: Created: latency-svc-642n2
Aug 21 20:25:48.506: INFO: Got endpoints: latency-svc-642n2 [1.591442063s]
Aug 21 20:25:48.671: INFO: Created: latency-svc-tcvzh
Aug 21 20:25:48.675: INFO: Got endpoints: latency-svc-tcvzh [1.387203878s]
Aug 21 20:25:48.717: INFO: Created: latency-svc-xd6tk
Aug 21 20:25:48.745: INFO: Got endpoints: latency-svc-xd6tk [1.225904505s]
Aug 21 20:25:48.857: INFO: Created: latency-svc-s466x
Aug 21 20:25:48.861: INFO: Got endpoints: latency-svc-s466x [1.173779256s]
Aug 21 20:25:49.024: INFO: Created: latency-svc-4fxcl
Aug 21 20:25:49.077: INFO: Created: latency-svc-xpbhn
Aug 21 20:25:49.078: INFO: Got endpoints: latency-svc-4fxcl [1.205218064s]
Aug 21 20:25:49.088: INFO: Got endpoints: latency-svc-xpbhn [1.20134598s]
Aug 21 20:25:49.186: INFO: Created: latency-svc-8n87d
Aug 21 20:25:49.191: INFO: Got endpoints: latency-svc-8n87d [1.188668711s]
Aug 21 20:25:49.247: INFO: Created: latency-svc-67fld
Aug 21 20:25:49.262: INFO: Got endpoints: latency-svc-67fld [1.250224248s]
Aug 21 20:25:49.385: INFO: Created: latency-svc-tn8rm
Aug 21 20:25:49.406: INFO: Got endpoints: latency-svc-tn8rm [1.358151387s]
Aug 21 20:25:49.433: INFO: Created: latency-svc-5kdn9
Aug 21 20:25:49.448: INFO: Got endpoints: latency-svc-5kdn9 [1.310477715s]
Aug 21 20:25:49.482: INFO: Created: latency-svc-qk4q4
Aug 21 20:25:49.588: INFO: Got endpoints: latency-svc-qk4q4 [1.437618477s]
Aug 21 20:25:49.591: INFO: Created: latency-svc-6d2z2
Aug 21 20:25:49.600: INFO: Got endpoints: latency-svc-6d2z2 [1.387975223s]
Aug 21 20:25:49.627: INFO: Created: latency-svc-d658j
Aug 21 20:25:49.639: INFO: Got endpoints: latency-svc-d658j [1.31007944s]
Aug 21 20:25:49.685: INFO: Created: latency-svc-m8bwk
Aug 21 20:25:49.730: INFO: Got endpoints: latency-svc-m8bwk [1.362382081s]
Aug 21 20:25:49.748: INFO: Created: latency-svc-87rv9
Aug 21 20:25:49.763: INFO: Got endpoints: latency-svc-87rv9 [1.346826707s]
Aug 21 20:25:49.784: INFO: Created: latency-svc-fkdv9
Aug 21 20:25:49.800: INFO: Got endpoints: latency-svc-fkdv9 [1.294416272s]
Aug 21 20:25:49.821: INFO: Created: latency-svc-b7kcg
Aug 21 20:25:49.886: INFO: Got endpoints: latency-svc-b7kcg [1.210774417s]
Aug 21 20:25:49.888: INFO: Created: latency-svc-ws9mt
Aug 21 20:25:49.902: INFO: Got endpoints: latency-svc-ws9mt [1.156469943s]
Aug 21 20:25:49.940: INFO: Created: latency-svc-c45hr
Aug 21 20:25:49.968: INFO: Got endpoints: latency-svc-c45hr [1.106746672s]
Aug 21 20:25:50.030: INFO: Created: latency-svc-b44dg
Aug 21 20:25:50.054: INFO: Got endpoints: latency-svc-b44dg [975.936433ms]
Aug 21 20:25:50.054: INFO: Created: latency-svc-b8xpm
Aug 21 20:25:50.070: INFO: Got endpoints: latency-svc-b8xpm [982.361711ms]
Aug 21 20:25:50.087: INFO: Created: latency-svc-62tzl
Aug 21 20:25:50.111: INFO: Got endpoints: latency-svc-62tzl [919.824226ms]
Aug 21 20:25:50.180: INFO: Created: latency-svc-8dm5l
Aug 21 20:25:50.184: INFO: Got endpoints: latency-svc-8dm5l [922.024855ms]
Aug 21 20:25:50.228: INFO: Created: latency-svc-sl6dh
Aug 21 20:25:50.239: INFO: Got endpoints: latency-svc-sl6dh [833.143837ms]
Aug 21 20:25:50.366: INFO: Created: latency-svc-tg6lx
Aug 21 20:25:50.371: INFO: Got endpoints: latency-svc-tg6lx [922.499699ms]
Aug 21 20:25:50.409: INFO: Created: latency-svc-zhlz4
Aug 21 20:25:50.420: INFO: Got endpoints: latency-svc-zhlz4 [831.572605ms]
Aug 21 20:25:50.439: INFO: Created: latency-svc-t6xv8
Aug 21 20:25:50.450: INFO: Got endpoints: latency-svc-t6xv8 [849.912623ms]
Aug 21 20:25:50.515: INFO: Created: latency-svc-n2pck
Aug 21 20:25:50.519: INFO: Got endpoints: latency-svc-n2pck [880.061996ms]
Aug 21 20:25:50.538: INFO: Created: latency-svc-x5kp6
Aug 21 20:25:50.565: INFO: Got endpoints: latency-svc-x5kp6 [834.703247ms]
Aug 21 20:25:50.582: INFO: Created: latency-svc-79wmx
Aug 21 20:25:50.595: INFO: Got endpoints: latency-svc-79wmx [831.654609ms]
Aug 21 20:25:50.648: INFO: Created: latency-svc-4p229
Aug 21 20:25:50.675: INFO: Got endpoints: latency-svc-4p229 [874.532452ms]
Aug 21 20:25:50.705: INFO: Created: latency-svc-65s8t
Aug 21 20:25:50.721: INFO: Got endpoints: latency-svc-65s8t [835.376362ms]
Aug 21 20:25:50.738: INFO: Created: latency-svc-4bcfp
Aug 21 20:25:50.802: INFO: Got endpoints: latency-svc-4bcfp [900.146502ms]
Aug 21 20:25:50.804: INFO: Created: latency-svc-hd7mm
Aug 21 20:25:50.825: INFO: Got endpoints: latency-svc-hd7mm [856.227841ms]
Aug 21 20:25:50.855: INFO: Created: latency-svc-ttm9z
Aug 21 20:25:50.872: INFO: Got endpoints: latency-svc-ttm9z [817.804641ms]
Aug 21 20:25:50.964: INFO: Created: latency-svc-24g78
Aug 21 20:25:50.971: INFO: Got endpoints: latency-svc-24g78 [900.31948ms]
Aug 21 20:25:51.001: INFO: Created: latency-svc-rmt4r
Aug 21 20:25:51.023: INFO: Got endpoints: latency-svc-rmt4r [912.211017ms]
Aug 21 20:25:51.108: INFO: Created: latency-svc-r4dwz
Aug 21 20:25:51.111: INFO: Got endpoints: latency-svc-r4dwz [927.529675ms]
Aug 21 20:25:51.140: INFO: Created: latency-svc-pgfg4
Aug 21 20:25:51.154: INFO: Got endpoints: latency-svc-pgfg4 [915.140591ms]
Aug 21 20:25:51.183: INFO: Created: latency-svc-m5jtt
Aug 21 20:25:51.198: INFO: Got endpoints: latency-svc-m5jtt [826.961136ms]
Aug 21 20:25:51.263: INFO: Created: latency-svc-wds2l
Aug 21 20:25:51.269: INFO: Got endpoints: latency-svc-wds2l [849.664052ms]
Aug 21 20:25:51.291: INFO: Created: latency-svc-xv2lg
Aug 21 20:25:51.306: INFO: Got endpoints: latency-svc-xv2lg [856.276359ms]
Aug 21 20:25:51.323: INFO: Created: latency-svc-nnfw5
Aug 21 20:25:51.348: INFO: Got endpoints: latency-svc-nnfw5 [828.802024ms]
Aug 21 20:25:51.516: INFO: Created: latency-svc-xwzmt
Aug 21 20:25:51.843: INFO: Got endpoints: latency-svc-xwzmt [1.277587969s]
Aug 21 20:25:52.066: INFO: Created: latency-svc-pqzj8
Aug 21 20:25:52.074: INFO: Got endpoints: latency-svc-pqzj8 [1.478907769s]
Aug 21 20:25:52.152: INFO: Created: latency-svc-5nbw5
Aug 21 20:25:52.245: INFO: Got endpoints: latency-svc-5nbw5 [1.570301496s]
Aug 21 20:25:52.263: INFO: Created: latency-svc-jccmr
Aug 21 20:25:52.290: INFO: Got endpoints: latency-svc-jccmr [1.569114644s]
Aug 21 20:25:52.570: INFO: Created: latency-svc-w7cdl
Aug 21 20:25:52.573: INFO: Got endpoints: latency-svc-w7cdl [1.771454926s]
Aug 21 20:25:52.768: INFO: Created: latency-svc-mbq6w
Aug 21 20:25:52.794: INFO: Got endpoints: latency-svc-mbq6w [1.969360008s]
Aug 21 20:25:52.819: INFO: Created: latency-svc-nbsqt
Aug 21 20:25:52.880: INFO: Got endpoints: latency-svc-nbsqt [2.007867047s]
Aug 21 20:25:52.930: INFO: Created: latency-svc-7kpr7
Aug 21 20:25:52.962: INFO: Got endpoints: latency-svc-7kpr7 [1.991055644s]
Aug 21 20:25:53.136: INFO: Created: latency-svc-7sxlr
Aug 21 20:25:53.142: INFO: Got endpoints: latency-svc-7sxlr [2.118772135s]
Aug 21 20:25:53.355: INFO: Created: latency-svc-wchdp
Aug 21 20:25:53.357: INFO: Got endpoints: latency-svc-wchdp [2.245855978s]
Aug 21 20:25:53.398: INFO: Created: latency-svc-6mdqt
Aug 21 20:25:53.430: INFO: Got endpoints: latency-svc-6mdqt [2.275648241s]
Aug 21 20:25:53.521: INFO: Created: latency-svc-4g9x5
Aug 21 20:25:53.523: INFO: Got endpoints: latency-svc-4g9x5 [2.325401329s]
Aug 21 20:25:53.665: INFO: Created: latency-svc-mpmv4
Aug 21 20:25:53.668: INFO: Got endpoints: latency-svc-mpmv4 [2.398391981s]
Aug 21 20:25:53.720: INFO: Created: latency-svc-7pjzv
Aug 21 20:25:53.737: INFO: Got endpoints: latency-svc-7pjzv [2.430687057s]
Aug 21 20:25:53.809: INFO: Created: latency-svc-c54xx
Aug 21 20:25:53.819: INFO: Got endpoints: latency-svc-c54xx [2.470997777s]
Aug 21 20:25:53.861: INFO: Created: latency-svc-8kmg7
Aug 21 20:25:53.890: INFO: Got endpoints: latency-svc-8kmg7 [2.047855555s]
Aug 21 20:25:53.958: INFO: Created: latency-svc-4xnvc
Aug 21 20:25:53.961: INFO: Got endpoints: latency-svc-4xnvc [1.886803573s]
Aug 21 20:25:54.002: INFO: Created: latency-svc-hrnmv
Aug 21 20:25:54.013: INFO: Got endpoints: latency-svc-hrnmv [1.768126519s]
Aug 21 20:25:54.097: INFO: Created: latency-svc-tt8tp
Aug 21 20:25:54.103: INFO: Got endpoints: latency-svc-tt8tp [1.813090178s]
Aug 21 20:25:54.131: INFO: Created: latency-svc-p7z8d
Aug 21 20:25:54.161: INFO: Got endpoints: latency-svc-p7z8d [1.587068078s]
Aug 21 20:25:54.287: INFO: Created: latency-svc-mll9l
Aug 21 20:25:54.291: INFO: Got endpoints: latency-svc-mll9l [1.496771762s]
Aug 21 20:25:54.329: INFO: Created: latency-svc-vkd46
Aug 21 20:25:54.351: INFO: Got endpoints: latency-svc-vkd46 [1.471018134s]
Aug 21 20:25:54.474: INFO: Created: latency-svc-p5rft
Aug 21 20:25:54.476: INFO: Got endpoints: latency-svc-p5rft [1.514655848s]
Aug 21 20:25:54.511: INFO: Created: latency-svc-9g9rg
Aug 21 20:25:54.525: INFO: Got endpoints: latency-svc-9g9rg [1.382997451s]
Aug 21 20:25:54.544: INFO: Created: latency-svc-l2cm7
Aug 21 20:25:54.562: INFO: Got endpoints: latency-svc-l2cm7 [1.2041255s]
Aug 21 20:25:54.642: INFO: Created: latency-svc-wkcsg
Aug 21 20:25:54.645: INFO: Got endpoints: latency-svc-wkcsg [1.21502516s]
Aug 21 20:25:54.661: INFO: Created: latency-svc-4n7b7
Aug 21 20:25:54.675: INFO: Got endpoints: latency-svc-4n7b7 [1.15201802s]
Aug 21 20:25:54.691: INFO: Created: latency-svc-vv5g4
Aug 21 20:25:54.706: INFO: Got endpoints: latency-svc-vv5g4 [60.394393ms]
Aug 21 20:25:54.726: INFO: Created: latency-svc-9qd78
Aug 21 20:25:54.736: INFO: Got endpoints: latency-svc-9qd78 [1.067840606s]
Aug 21 20:25:54.790: INFO: Created: latency-svc-s9l42
Aug 21 20:25:54.805: INFO: Got endpoints: latency-svc-s9l42 [1.068312424s]
Aug 21 20:25:54.836: INFO: Created: latency-svc-lqrcg
Aug 21 20:25:54.940: INFO: Got endpoints: latency-svc-lqrcg [1.120444691s]
Aug 21 20:25:54.955: INFO: Created: latency-svc-cnfr8
Aug 21 20:25:54.972: INFO: Got endpoints: latency-svc-cnfr8 [1.081277096s]
Aug 21 20:25:54.995: INFO: Created: latency-svc-8qqv8
Aug 21 20:25:55.014: INFO: Got endpoints: latency-svc-8qqv8 [1.053019064s]
Aug 21 20:25:55.038: INFO: Created: latency-svc-5vg7c
Aug 21 20:25:55.078: INFO: Got endpoints: latency-svc-5vg7c [1.064278677s]
Aug 21 20:25:55.094: INFO: Created: latency-svc-dpb72
Aug 21 20:25:55.110: INFO: Got endpoints: latency-svc-dpb72 [1.006318904s]
Aug 21 20:25:55.130: INFO: Created: latency-svc-kbkzw
Aug 21 20:25:55.140: INFO: Got endpoints: latency-svc-kbkzw [979.173056ms]
Aug 21 20:25:55.160: INFO: Created: latency-svc-29vcc
Aug 21 20:25:55.170: INFO: Got endpoints: latency-svc-29vcc [879.543366ms]
Aug 21 20:25:55.246: INFO: Created: latency-svc-hlq92
Aug 21 20:25:55.249: INFO: Got endpoints: latency-svc-hlq92 [897.957696ms]
Aug 21 20:25:55.569: INFO: Created: latency-svc-q79hp
Aug 21 20:25:55.970: INFO: Got endpoints: latency-svc-q79hp [1.494063718s]
Aug 21 20:25:55.975: INFO: Created: latency-svc-d9qzp
Aug 21 20:25:55.981: INFO: Got endpoints: latency-svc-d9qzp [1.456248308s]
Aug 21 20:25:56.010: INFO: Created: latency-svc-97sk5
Aug 21 20:25:56.034: INFO: Got endpoints: latency-svc-97sk5 [1.472846439s]
Aug 21 20:25:56.121: INFO: Created: latency-svc-6zfcd
Aug 21 20:25:56.123: INFO: Got endpoints: latency-svc-6zfcd [1.447549254s]
Aug 21 20:25:56.152: INFO: Created: latency-svc-54btm
Aug 21 20:25:56.166: INFO: Got endpoints: latency-svc-54btm [1.460531719s]
Aug 21 20:25:56.220: INFO: Created: latency-svc-8cwq4
Aug 21 20:25:56.282: INFO: Got endpoints: latency-svc-8cwq4 [1.545969223s]
Aug 21 20:25:56.309: INFO: Created: latency-svc-2vdf9
Aug 21 20:25:56.323: INFO: Got endpoints: latency-svc-2vdf9 [1.517193062s]
Aug 21 20:25:56.380: INFO: Created: latency-svc-pnxt6
Aug 21 20:25:56.443: INFO: Got endpoints: latency-svc-pnxt6 [1.502979321s]
Aug 21 20:25:56.476: INFO: Created: latency-svc-7fdk6
Aug 21 20:25:56.491: INFO: Got endpoints: latency-svc-7fdk6 [1.519525141s]
Aug 21 20:25:56.513: INFO: Created: latency-svc-hnj98
Aug 21 20:25:56.528: INFO: Got endpoints: latency-svc-hnj98 [1.514700851s]
Aug 21 20:25:56.599: INFO: Created: latency-svc-fq64k
Aug 21 20:25:56.601: INFO: Got endpoints: latency-svc-fq64k [1.523082102s]
Aug 21 20:25:56.651: INFO: Created: latency-svc-k8ftn
Aug 21 20:25:56.673: INFO: Got endpoints: latency-svc-k8ftn [1.562840551s]
Aug 21 20:25:56.736: INFO: Created: latency-svc-qmm95
Aug 21 20:25:56.738: INFO: Got endpoints: latency-svc-qmm95 [1.598469805s]
Aug 21 20:25:56.770: INFO: Created: latency-svc-hhsxj
Aug 21 20:25:56.781: INFO: Got endpoints: latency-svc-hhsxj [1.610285265s]
Aug 21 20:25:56.798: INFO: Created: latency-svc-znmnb
Aug 21 20:25:56.811: INFO: Got endpoints: latency-svc-znmnb [1.562332129s]
Aug 21 20:25:56.834: INFO: Created: latency-svc-599mc
Aug 21 20:25:56.868: INFO: Got endpoints: latency-svc-599mc [896.974785ms]
Aug 21 20:25:56.896: INFO: Created: latency-svc-4rkjj
Aug 21 20:25:56.908: INFO: Got endpoints: latency-svc-4rkjj [926.424587ms]
Aug 21 20:25:56.932: INFO: Created: latency-svc-nbhz6
Aug 21 20:25:56.944: INFO: Got endpoints: latency-svc-nbhz6 [909.267727ms]
Aug 21 20:25:57.002: INFO: Created: latency-svc-fd6lh
Aug 21 20:25:57.018: INFO: Got endpoints: latency-svc-fd6lh [895.420781ms]
Aug 21 20:25:57.058: INFO: Created: latency-svc-4krzs
Aug 21 20:25:57.070: INFO: Got endpoints: latency-svc-4krzs [903.86635ms]
Aug 21 20:25:57.125: INFO: Created: latency-svc-x9r9t
Aug 21 20:25:57.149: INFO: Got endpoints: latency-svc-x9r9t [866.642054ms]
Aug 21 20:25:57.149: INFO: Created: latency-svc-rp9c2
Aug 21 20:25:57.163: INFO: Got endpoints: latency-svc-rp9c2 [840.808413ms]
Aug 21 20:25:57.199: INFO: Created: latency-svc-klhth
Aug 21 20:25:57.221: INFO: Got endpoints: latency-svc-klhth [778.53678ms]
Aug 21 20:25:57.287: INFO: Created: latency-svc-jqvcz
Aug 21 20:25:57.310: INFO: Got endpoints: latency-svc-jqvcz [818.321387ms]
Aug 21 20:25:57.311: INFO: Created: latency-svc-k2jcb
Aug 21 20:25:57.324: INFO: Got endpoints: latency-svc-k2jcb [795.43633ms]
Aug 21 20:25:57.340: INFO: Created: latency-svc-tpjb4
Aug 21 20:25:57.354: INFO: Got endpoints: latency-svc-tpjb4 [753.581392ms]
Aug 21 20:25:57.437: INFO: Created: latency-svc-jkcf2
Aug 21 20:25:57.439: INFO: Got endpoints: latency-svc-jkcf2 [765.984495ms]
Aug 21 20:25:57.469: INFO: Created: latency-svc-lcxq7
Aug 21 20:25:57.481: INFO: Got endpoints: latency-svc-lcxq7 [742.544592ms]
Aug 21 20:25:57.502: INFO: Created: latency-svc-hw895
Aug 21 20:25:57.532: INFO: Got endpoints: latency-svc-hw895 [750.773017ms]
Aug 21 20:25:57.586: INFO: Created: latency-svc-4h4jm
Aug 21 20:25:57.618: INFO: Got endpoints: latency-svc-4h4jm [807.184114ms]
Aug 21 20:25:57.666: INFO: Created: latency-svc-ncz4l
Aug 21 20:25:57.772: INFO: Got endpoints: latency-svc-ncz4l [904.24736ms]
Aug 21 20:25:57.773: INFO: Created: latency-svc-8wpc7
Aug 21 20:25:58.070: INFO: Got endpoints: latency-svc-8wpc7 [1.161990364s]
Aug 21 20:25:58.312: INFO: Created: latency-svc-d9fpg
Aug 21 20:25:58.363: INFO: Got endpoints: latency-svc-d9fpg [1.41901315s]
Aug 21 20:25:58.571: INFO: Created: latency-svc-jx88f
Aug 21 20:25:58.599: INFO: Got endpoints: latency-svc-jx88f [1.580990216s]
Aug 21 20:25:58.648: INFO: Created: latency-svc-tzh48
Aug 21 20:25:58.663: INFO: Got endpoints: latency-svc-tzh48 [1.592413042s]
Aug 21 20:25:58.742: INFO: Created: latency-svc-pzdpl
Aug 21 20:25:58.767: INFO: Got endpoints: latency-svc-pzdpl [1.61836223s]
Aug 21 20:25:58.959: INFO: Created: latency-svc-lrhgv
Aug 21 20:25:58.961: INFO: Got endpoints: latency-svc-lrhgv [1.797037576s]
Aug 21 20:25:59.143: INFO: Created: latency-svc-c8mv2
Aug 21 20:25:59.155: INFO: Got endpoints: latency-svc-c8mv2 [1.933637856s]
Aug 21 20:25:59.224: INFO: Created: latency-svc-jvxkj
Aug 21 20:25:59.281: INFO: Got endpoints: latency-svc-jvxkj [1.97121962s]
Aug 21 20:25:59.292: INFO: Created: latency-svc-qjskg
Aug 21 20:25:59.305: INFO: Got endpoints: latency-svc-qjskg [1.981397481s]
Aug 21 20:25:59.323: INFO: Created: latency-svc-bwq7g
Aug 21 20:25:59.336: INFO: Got endpoints: latency-svc-bwq7g [1.981101215s]
Aug 21 20:25:59.365: INFO: Created: latency-svc-2fxxp
Aug 21 20:25:59.378: INFO: Got endpoints: latency-svc-2fxxp [1.939185056s]
Aug 21 20:25:59.425: INFO: Created: latency-svc-nmdtz
Aug 21 20:25:59.428: INFO: Got endpoints: latency-svc-nmdtz [1.946809158s]
Aug 21 20:25:59.477: INFO: Created: latency-svc-skrkp
Aug 21 20:25:59.515: INFO: Got endpoints: latency-svc-skrkp [1.982991379s]
Aug 21 20:25:59.563: INFO: Created: latency-svc-j9vtf
Aug 21 20:25:59.565: INFO: Got endpoints: latency-svc-j9vtf [1.946970999s]
Aug 21 20:25:59.591: INFO: Created: latency-svc-2fc84
Aug 21 20:25:59.603: INFO: Got endpoints: latency-svc-2fc84 [1.830776923s]
Aug 21 20:25:59.621: INFO: Created: latency-svc-n4tsz
Aug 21 20:25:59.638: INFO: Got endpoints: latency-svc-n4tsz [1.567910793s]
Aug 21 20:25:59.713: INFO: Created: latency-svc-vl7cl
Aug 21 20:25:59.715: INFO: Got endpoints: latency-svc-vl7cl [1.352594771s]
Aug 21 20:25:59.744: INFO: Created: latency-svc-x64wp
Aug 21 20:25:59.752: INFO: Got endpoints: latency-svc-x64wp [1.152650029s]
Aug 21 20:25:59.770: INFO: Created: latency-svc-jzn2r
Aug 21 20:25:59.874: INFO: Got endpoints: latency-svc-jzn2r [1.211089218s]
Aug 21 20:25:59.875: INFO: Created: latency-svc-dd2jr
Aug 21 20:25:59.885: INFO: Got endpoints: latency-svc-dd2jr [1.117633812s]
Aug 21 20:25:59.920: INFO: Created: latency-svc-kv2fr
Aug 21 20:25:59.933: INFO: Got endpoints: latency-svc-kv2fr [972.137547ms]
Aug 21 20:25:59.951: INFO: Created: latency-svc-jns6n
Aug 21 20:26:00.018: INFO: Got endpoints: latency-svc-jns6n [862.863565ms]
Aug 21 20:26:00.031: INFO: Created: latency-svc-mx48v
Aug 21 20:26:00.042: INFO: Got endpoints: latency-svc-mx48v [760.558182ms]
Aug 21 20:26:00.060: INFO: Created: latency-svc-bqqdg
Aug 21 20:26:00.071: INFO: Got endpoints: latency-svc-bqqdg [766.08704ms]
Aug 21 20:26:00.089: INFO: Created: latency-svc-pb2bn
Aug 21 20:26:00.102: INFO: Got endpoints: latency-svc-pb2bn [766.205841ms]
Aug 21 20:26:00.179: INFO: Created: latency-svc-g8fd2
Aug 21 20:26:00.182: INFO: Got endpoints: latency-svc-g8fd2 [803.535928ms]
Aug 21 20:26:00.223: INFO: Created: latency-svc-62mc4
Aug 21 20:26:00.234: INFO: Got endpoints: latency-svc-62mc4 [806.529863ms]
Aug 21 20:26:00.251: INFO: Created: latency-svc-whh9v
Aug 21 20:26:00.265: INFO: Got endpoints: latency-svc-whh9v [750.325432ms]
Aug 21 20:26:00.323: INFO: Created: latency-svc-ql76v
Aug 21 20:26:00.327: INFO: Got endpoints: latency-svc-ql76v [761.058358ms]
Aug 21 20:26:00.355: INFO: Created: latency-svc-xrsxd
Aug 21 20:26:00.367: INFO: Got endpoints: latency-svc-xrsxd [764.721232ms]
Aug 21 20:26:00.385: INFO: Created: latency-svc-6f89r
Aug 21 20:26:00.397: INFO: Got endpoints: latency-svc-6f89r [759.890417ms]
Aug 21 20:26:00.460: INFO: Created: latency-svc-6g8l6
Aug 21 20:26:00.470: INFO: Got endpoints: latency-svc-6g8l6 [754.516134ms]
Aug 21 20:26:00.490: INFO: Created: latency-svc-bl8rx
Aug 21 20:26:00.520: INFO: Got endpoints: latency-svc-bl8rx [768.249038ms]
Aug 21 20:26:00.553: INFO: Created: latency-svc-ml7dm
Aug 21 20:26:00.616: INFO: Got endpoints: latency-svc-ml7dm [742.412284ms]
Aug 21 20:26:00.619: INFO: Created: latency-svc-7knm2
Aug 21 20:26:00.638: INFO: Got endpoints: latency-svc-7knm2 [752.850459ms]
Aug 21 20:26:00.713: INFO: Created: latency-svc-qnc4s
Aug 21 20:26:00.796: INFO: Got endpoints: latency-svc-qnc4s [863.330375ms]
Aug 21 20:26:00.799: INFO: Created: latency-svc-g89f4
Aug 21 20:26:00.811: INFO: Got endpoints: latency-svc-g89f4 [793.143186ms]
Aug 21 20:26:00.853: INFO: Created: latency-svc-cmtw6
Aug 21 20:26:00.883: INFO: Got endpoints: latency-svc-cmtw6 [841.492997ms]
Aug 21 20:26:00.988: INFO: Created: latency-svc-pmnfm
Aug 21 20:26:01.009: INFO: Got endpoints: latency-svc-pmnfm [937.746587ms]
Aug 21 20:26:01.011: INFO: Created: latency-svc-cqtdf
Aug 21 20:26:01.033: INFO: Got endpoints: latency-svc-cqtdf [931.169248ms]
Aug 21 20:26:01.072: INFO: Created: latency-svc-qhzs5
Aug 21 20:26:01.137: INFO: Got endpoints: latency-svc-qhzs5 [955.810012ms]
Aug 21 20:26:01.139: INFO: Created: latency-svc-8rzr9
Aug 21 20:26:01.148: INFO: Got endpoints: latency-svc-8rzr9 [913.697774ms]
Aug 21 20:26:01.169: INFO: Created: latency-svc-w2ldm
Aug 21 20:26:01.185: INFO: Got endpoints: latency-svc-w2ldm [919.532408ms]
Aug 21 20:26:01.207: INFO: Created: latency-svc-cl48x
Aug 21 20:26:01.221: INFO: Got endpoints: latency-svc-cl48x [894.428995ms]
Aug 21 20:26:01.275: INFO: Created: latency-svc-ldqsf
Aug 21 20:26:01.278: INFO: Got endpoints: latency-svc-ldqsf [910.479634ms]
Aug 21 20:26:01.318: INFO: Created: latency-svc-5b68q
Aug 21 20:26:01.330: INFO: Got endpoints: latency-svc-5b68q [932.102566ms]
Aug 21 20:26:01.350: INFO: Created: latency-svc-rt8ng
Aug 21 20:26:01.360: INFO: Got endpoints: latency-svc-rt8ng [889.849813ms]
Aug 21 20:26:01.426: INFO: Created: latency-svc-cjbcl
Aug 21 20:26:01.428: INFO: Got endpoints: latency-svc-cjbcl [907.598921ms]
Aug 21 20:26:01.484: INFO: Created: latency-svc-q7ldc
Aug 21 20:26:01.510: INFO: Got endpoints: latency-svc-q7ldc [893.877843ms]
Aug 21 20:26:01.563: INFO: Created: latency-svc-mf59k
Aug 21 20:26:01.592: INFO: Got endpoints: latency-svc-mf59k [954.086013ms]
Aug 21 20:26:01.592: INFO: Created: latency-svc-bk8l9
Aug 21 20:26:01.607: INFO: Got endpoints: latency-svc-bk8l9 [810.515736ms]
Aug 21 20:26:01.627: INFO: Created: latency-svc-92sp8
Aug 21 20:26:01.643: INFO: Got endpoints: latency-svc-92sp8 [832.06881ms]
Aug 21 20:26:01.714: INFO: Created: latency-svc-49xz2
Aug 21 20:26:01.716: INFO: Got endpoints: latency-svc-49xz2 [832.850216ms]
Aug 21 20:26:01.751: INFO: Created: latency-svc-c4sf4
Aug 21 20:26:01.763: INFO: Got endpoints: latency-svc-c4sf4 [754.121029ms]
Aug 21 20:26:01.786: INFO: Created: latency-svc-lmsg6
Aug 21 20:26:01.801: INFO: Got endpoints: latency-svc-lmsg6 [767.348811ms]
Aug 21 20:26:01.850: INFO: Created: latency-svc-fng7z
Aug 21 20:26:01.853: INFO: Got endpoints: latency-svc-fng7z [715.696334ms]
Aug 21 20:26:01.880: INFO: Created: latency-svc-mp5jw
Aug 21 20:26:01.890: INFO: Got endpoints: latency-svc-mp5jw [742.155933ms]
Aug 21 20:26:01.912: INFO: Created: latency-svc-l9hgn
Aug 21 20:26:01.927: INFO: Got endpoints: latency-svc-l9hgn [742.199437ms]
Aug 21 20:26:01.949: INFO: Created: latency-svc-mkbnf
Aug 21 20:26:02.000: INFO: Got endpoints: latency-svc-mkbnf [778.735486ms]
Aug 21 20:26:02.031: INFO: Created: latency-svc-tmj9m
Aug 21 20:26:02.053: INFO: Got endpoints: latency-svc-tmj9m [775.252933ms]
Aug 21 20:26:02.071: INFO: Created: latency-svc-hbh8d
Aug 21 20:26:02.083: INFO: Got endpoints: latency-svc-hbh8d [753.648103ms]
Aug 21 20:26:02.144: INFO: Created: latency-svc-cr96x
Aug 21 20:26:02.146: INFO: Got endpoints: latency-svc-cr96x [786.471722ms]
Aug 21 20:26:02.204: INFO: Created: latency-svc-bbjmd
Aug 21 20:26:02.222: INFO: Got endpoints: latency-svc-bbjmd [794.007582ms]
Aug 21 20:26:02.288: INFO: Created: latency-svc-vnn9c
Aug 21 20:26:02.291: INFO: Got endpoints: latency-svc-vnn9c [780.285735ms]
Aug 21 20:26:02.321: INFO: Created: latency-svc-lqwqb
Aug 21 20:26:02.337: INFO: Got endpoints: latency-svc-lqwqb [745.087101ms]
Aug 21 20:26:02.366: INFO: Created: latency-svc-krs26
Aug 21 20:26:02.379: INFO: Got endpoints: latency-svc-krs26 [771.829897ms]
Aug 21 20:26:02.379: INFO: Latencies: [60.394393ms 99.706702ms 150.232469ms 342.338575ms 464.408789ms 509.008687ms 715.696334ms 733.750113ms 742.155933ms 742.199437ms 742.412284ms 742.544592ms 745.087101ms 750.325432ms 750.773017ms 752.850459ms 753.581392ms 753.648103ms 754.121029ms 754.516134ms 759.890417ms 760.558182ms 761.058358ms 764.721232ms 765.984495ms 766.08704ms 766.205841ms 767.348811ms 768.249038ms 771.829897ms 775.252933ms 778.53678ms 778.735486ms 780.285735ms 786.471722ms 793.143186ms 794.007582ms 795.43633ms 798.624773ms 803.535928ms 806.529863ms 807.184114ms 810.515736ms 817.804641ms 818.321387ms 826.961136ms 828.802024ms 831.572605ms 831.654609ms 832.06881ms 832.850216ms 833.143837ms 834.703247ms 835.376362ms 840.808413ms 841.492997ms 849.664052ms 849.912623ms 856.227841ms 856.276359ms 862.863565ms 863.330375ms 866.642054ms 874.532452ms 879.543366ms 880.061996ms 889.849813ms 893.877843ms 894.428995ms 895.420781ms 896.974785ms 897.957696ms 900.146502ms 900.31948ms 903.86635ms 904.24736ms 907.598921ms 909.267727ms 910.479634ms 912.211017ms 913.697774ms 915.140591ms 919.532408ms 919.824226ms 922.024855ms 922.499699ms 926.424587ms 927.529675ms 931.169248ms 932.102566ms 937.746587ms 954.086013ms 955.810012ms 972.137547ms 975.936433ms 979.173056ms 982.361711ms 1.006318904s 1.053019064s 1.064278677s 1.067840606s 1.068312424s 1.081277096s 1.106746672s 1.117633812s 1.120444691s 1.15201802s 1.152650029s 1.156469943s 1.161990364s 1.173779256s 1.188668711s 1.20134598s 1.2041255s 1.205218064s 1.210774417s 1.211089218s 1.21502516s 1.225904505s 1.250224248s 1.277587969s 1.294416272s 1.31007944s 1.310477715s 1.346826707s 1.352594771s 1.358151387s 1.362382081s 1.382997451s 1.387203878s 1.387975223s 1.41901315s 1.437618477s 1.447549254s 1.456248308s 1.460531719s 1.471018134s 1.472846439s 1.478907769s 1.494063718s 1.496771762s 1.502979321s 1.514655848s 1.514700851s 1.517193062s 1.519525141s 1.523082102s 1.545969223s 1.562332129s 1.562840551s 1.567910793s 1.569114644s 1.570301496s 1.580990216s 1.587068078s 1.591442063s 1.592413042s 1.598469805s 1.610285265s 1.61836223s 1.768126519s 1.771454926s 1.797037576s 1.813090178s 1.830776923s 1.886803573s 1.899887933s 1.933637856s 1.939185056s 1.946809158s 1.946970999s 1.969360008s 1.97121962s 1.981101215s 1.981397481s 1.982991379s 1.991055644s 2.007867047s 2.047855555s 2.118772135s 2.245855978s 2.273257303s 2.275648241s 2.325401329s 2.398391981s 2.430687057s 2.470997777s 2.504809718s 2.603171426s 2.619670766s 2.673123651s 2.73300482s 2.79364063s 2.805624577s 2.858163296s 2.871944865s 2.933832815s 2.97314939s 2.987567686s 2.997236968s]
Aug 21 20:26:02.379: INFO: 50 %ile: 1.067840606s
Aug 21 20:26:02.379: INFO: 90 %ile: 2.245855978s
Aug 21 20:26:02.379: INFO: 99 %ile: 2.987567686s
Aug 21 20:26:02.379: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:26:02.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9198" for this suite.
Aug 21 20:26:44.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:26:44.520: INFO: namespace svc-latency-9198 deletion completed in 42.134556316s

• [SLOW TEST:65.786 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
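
The percentile summary above (50/90/99 %ile over 200 samples) is plain order statistics: sort the per-service latencies, then index into the sorted slice. A minimal sketch in Go, fed with a handful of samples from the run above; the nearest-rank index rule is an assumption, the e2e framework's exact rule may differ:

    package main

    import (
        "fmt"
        "sort"
        "time"
    )

    // percentile returns the p-th percentile of an ascending slice using a
    // simple nearest-rank rule (assumed; not the framework's exact code).
    func percentile(sorted []time.Duration, p int) time.Duration {
        idx := p * len(sorted) / 100
        if idx >= len(sorted) {
            idx = len(sorted) - 1
        }
        return sorted[idx]
    }

    func main() {
        latencies := []time.Duration{ // nanoseconds; values taken from the log above
            60394393, 833143837, 1067840606, 2245855978, 2987567686,
        }
        sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
        for _, p := range []int{50, 90, 99} {
            fmt.Printf("%d %%ile: %v\n", p, percentile(latencies, p))
        }
    }

------------------------------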
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:26:44.520: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Aug 21 20:26:45.381: INFO: Waiting up to 5m0s for pod "pod-66af2846-abb8-443e-9eab-c0e4d32a6864" in namespace "emptydir-4139" to be "success or failure"
Aug 21 20:26:45.420: INFO: Pod "pod-66af2846-abb8-443e-9eab-c0e4d32a6864": Phase="Pending", Reason="", readiness=false. Elapsed: 38.758842ms
Aug 21 20:26:47.642: INFO: Pod "pod-66af2846-abb8-443e-9eab-c0e4d32a6864": Phase="Pending", Reason="", readiness=false. Elapsed: 2.260659113s
Aug 21 20:26:49.870: INFO: Pod "pod-66af2846-abb8-443e-9eab-c0e4d32a6864": Phase="Pending", Reason="", readiness=false. Elapsed: 4.489168997s
Aug 21 20:26:51.873: INFO: Pod "pod-66af2846-abb8-443e-9eab-c0e4d32a6864": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.492091035s
STEP: Saw pod success
Aug 21 20:26:51.873: INFO: Pod "pod-66af2846-abb8-443e-9eab-c0e4d32a6864" satisfied condition "success or failure"
Aug 21 20:26:51.875: INFO: Trying to get logs from node iruya-worker2 pod pod-66af2846-abb8-443e-9eab-c0e4d32a6864 container test-container: 
STEP: delete the pod
Aug 21 20:26:51.949: INFO: Waiting for pod pod-66af2846-abb8-443e-9eab-c0e4d32a6864 to disappear
Aug 21 20:26:51.982: INFO: Pod pod-66af2846-abb8-443e-9eab-c0e4d32a6864 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:26:51.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4139" for this suite.
Aug 21 20:26:57.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:26:58.058: INFO: namespace emptydir-4139 deletion completed in 6.072739262s

• [SLOW TEST:13.538 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
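
Each (root,MODE,MEDIUM) emptyDir spec boils down to one pod: mount an emptyDir volume into a test container, inspect the mode, and exit 0 so the pod reaches Succeeded. A sketch of such a pod built with the Kubernetes Go API types and printed as JSON; the pod name, image, command, and mount path are illustrative, not taken from the test source:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"}, // illustrative name
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever, // pod must terminate to reach Succeeded
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    // Empty Medium selects the node's default storage; the
                    // tmpfs variants set Medium: corev1.StorageMediumMemory.
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{{
                    Name:         "test-container",
                    Image:        "busybox", // stand-in for the suite's test image
                    Command:      []string{"sh", "-c", "ls -ld /mnt/test && touch /mnt/test/f"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // submit with kubectl apply -f, or via client-go
    }

------------------------------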
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:26:58.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 21 20:26:58.144: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d2269423-53fa-4f3d-a159-c753617d5da1" in namespace "downward-api-3188" to be "success or failure"
Aug 21 20:26:58.159: INFO: Pod "downwardapi-volume-d2269423-53fa-4f3d-a159-c753617d5da1": Phase="Pending", Reason="", readiness=false. Elapsed: 14.160498ms
Aug 21 20:27:00.162: INFO: Pod "downwardapi-volume-d2269423-53fa-4f3d-a159-c753617d5da1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01757s
Aug 21 20:27:02.186: INFO: Pod "downwardapi-volume-d2269423-53fa-4f3d-a159-c753617d5da1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041934133s
STEP: Saw pod success
Aug 21 20:27:02.186: INFO: Pod "downwardapi-volume-d2269423-53fa-4f3d-a159-c753617d5da1" satisfied condition "success or failure"
Aug 21 20:27:02.189: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-d2269423-53fa-4f3d-a159-c753617d5da1 container client-container: 
STEP: delete the pod
Aug 21 20:27:02.204: INFO: Waiting for pod downwardapi-volume-d2269423-53fa-4f3d-a159-c753617d5da1 to disappear
Aug 21 20:27:02.230: INFO: Pod downwardapi-volume-d2269423-53fa-4f3d-a159-c753617d5da1 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:27:02.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3188" for this suite.
Aug 21 20:27:08.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:27:08.309: INFO: namespace downward-api-3188 deletion completed in 6.075422681s

• [SLOW TEST:10.249 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
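
"should provide podname only" exercises a downward API volume with a single item mapping metadata.name to a file, which the client-container then cats back for verification. A sketch of just that volume (the volume name and mount-path comment are illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo", // illustrative name
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        // Mounted at <mountPath>/podname, the file holds the pod's own name.
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }

------------------------------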
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:27:08.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Aug 21 20:27:08.372: INFO: Waiting up to 5m0s for pod "pod-723eec07-5080-42d2-8a78-1d3f1355ab78" in namespace "emptydir-2190" to be "success or failure"
Aug 21 20:27:08.383: INFO: Pod "pod-723eec07-5080-42d2-8a78-1d3f1355ab78": Phase="Pending", Reason="", readiness=false. Elapsed: 10.42265ms
Aug 21 20:27:10.387: INFO: Pod "pod-723eec07-5080-42d2-8a78-1d3f1355ab78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014736342s
Aug 21 20:27:12.390: INFO: Pod "pod-723eec07-5080-42d2-8a78-1d3f1355ab78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017701279s
STEP: Saw pod success
Aug 21 20:27:12.390: INFO: Pod "pod-723eec07-5080-42d2-8a78-1d3f1355ab78" satisfied condition "success or failure"
Aug 21 20:27:12.392: INFO: Trying to get logs from node iruya-worker2 pod pod-723eec07-5080-42d2-8a78-1d3f1355ab78 container test-container: 
STEP: delete the pod
Aug 21 20:27:12.432: INFO: Waiting for pod pod-723eec07-5080-42d2-8a78-1d3f1355ab78 to disappear
Aug 21 20:27:12.437: INFO: Pod pod-723eec07-5080-42d2-8a78-1d3f1355ab78 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:27:12.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2190" for this suite.
Aug 21 20:27:18.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:27:18.544: INFO: namespace emptydir-2190 deletion completed in 6.103744823s

• [SLOW TEST:10.235 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
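
Every spec in this log waits with the same pattern: "Waiting up to 5m0s for pod ... to be \"success or failure\"", polling the pod's Phase until it goes terminal. A sketch of that loop against the pod from the spec above, assuming the context-free client-go signatures contemporary with this v1.15 cluster (newer client-go adds a context argument):

    package main

    import (
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        ns, name := "emptydir-2190", "pod-723eec07-5080-42d2-8a78-1d3f1355ab78" // from the log above
        err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            // 1.15-era signature; newer client-go is Get(ctx, name, metav1.GetOptions{}).
            pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
            if err != nil {
                return false, err
            }
            fmt.Printf("Phase=%q\n", pod.Status.Phase)
            // "success or failure": stop on either terminal phase, let the caller judge which.
            return pod.Status.Phase == corev1.PodSucceeded || pod.Status.Phase == corev1.PodFailed, nil
        })
        if err != nil {
            panic(err)
        }
    }

------------------------------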
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:27:18.544: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Aug 21 20:27:18.582: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:27:18.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-629" for this suite.
Aug 21 20:27:24.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:27:24.739: INFO: namespace kubectl-629 deletion completed in 6.065268681s

• [SLOW TEST:6.195 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
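
"proxy -p 0" asks kubectl to bind an ephemeral port, which it reports on its first stdout line; that is what lets the test start the proxy asynchronously and then curl /api/. A sketch of driving the same flow from Go; the "Starting to serve on host:port" stdout format is an assumption about kubectl's output:

    package main

    import (
        "bufio"
        "fmt"
        "io/ioutil"
        "net/http"
        "os/exec"
        "regexp"
    )

    func main() {
        cmd := exec.Command("kubectl", "--kubeconfig", "/root/.kube/config",
            "proxy", "-p", "0", "--disable-filter")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        defer cmd.Process.Kill()

        // kubectl is assumed to print e.g. "Starting to serve on 127.0.0.1:38383".
        line, err := bufio.NewReader(stdout).ReadString('\n')
        if err != nil {
            panic(err)
        }
        port := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(line)[1]

        resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s/api/", port))
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := ioutil.ReadAll(resp.Body)
        fmt.Println(string(body)) // should list the cluster's API versions
    }

------------------------------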
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:27:24.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Aug 21 20:27:24.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-7489'
Aug 21 20:27:27.741: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Aug 21 20:27:27.741: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Aug 21 20:27:27.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-7489'
Aug 21 20:27:27.861: INFO: stderr: ""
Aug 21 20:27:27.861: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:27:27.861: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7489" for this suite.
Aug 21 20:27:33.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:27:33.945: INFO: namespace kubectl-7489 deletion completed in 6.081093442s

• [SLOW TEST:9.206 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
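
As the deprecation warning above says, "kubectl run --generator=job/v1 --restart=OnFailure" amounts to creating a batch/v1 Job whose pod template carries RestartPolicy OnFailure, which is the property this spec verifies. A sketch of that object, printed as JSON rather than submitted:

    package main

    import (
        "encoding/json"
        "fmt"

        batchv1 "k8s.io/api/batch/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        job := &batchv1.Job{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
            Spec: batchv1.JobSpec{
                Template: corev1.PodTemplateSpec{
                    Spec: corev1.PodSpec{
                        RestartPolicy: corev1.RestartPolicyOnFailure, // what the spec checks
                        Containers: []corev1.Container{{
                            Name:  "e2e-test-nginx-job",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(job, "", "  ")
        fmt.Println(string(out)) // create via kubectl apply or clientset.BatchV1().Jobs(ns).Create
    }

------------------------------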
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:27:33.946: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 21 20:27:34.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51557c5c-38b9-4fc9-8816-3f0a33ed36a8" in namespace "projected-2924" to be "success or failure"
Aug 21 20:27:34.006: INFO: Pod "downwardapi-volume-51557c5c-38b9-4fc9-8816-3f0a33ed36a8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.268214ms
Aug 21 20:27:36.031: INFO: Pod "downwardapi-volume-51557c5c-38b9-4fc9-8816-3f0a33ed36a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0292912s
Aug 21 20:27:38.041: INFO: Pod "downwardapi-volume-51557c5c-38b9-4fc9-8816-3f0a33ed36a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038951354s
STEP: Saw pod success
Aug 21 20:27:38.041: INFO: Pod "downwardapi-volume-51557c5c-38b9-4fc9-8816-3f0a33ed36a8" satisfied condition "success or failure"
Aug 21 20:27:38.043: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-51557c5c-38b9-4fc9-8816-3f0a33ed36a8 container client-container: 
STEP: delete the pod
Aug 21 20:27:38.098: INFO: Waiting for pod downwardapi-volume-51557c5c-38b9-4fc9-8816-3f0a33ed36a8 to disappear
Aug 21 20:27:38.108: INFO: Pod downwardapi-volume-51557c5c-38b9-4fc9-8816-3f0a33ed36a8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:27:38.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2924" for this suite.
Aug 21 20:27:44.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:27:44.206: INFO: namespace projected-2924 deletion completed in 6.095333552s

• [SLOW TEST:10.261 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
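
The projected-volume variant of the podname spec wraps the same downward API item in a projected volume's Sources list, the mechanism that lets a single volume merge downward API, ConfigMap, Secret, and service-account-token sources. A sketch of just the volume (the name is illustrative):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo", // illustrative name
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }

------------------------------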
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:27:44.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 21 20:27:44.273: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67a214b8-2655-4efc-a2ba-90bc8f01c125" in namespace "downward-api-48" to be "success or failure"
Aug 21 20:27:44.293: INFO: Pod "downwardapi-volume-67a214b8-2655-4efc-a2ba-90bc8f01c125": Phase="Pending", Reason="", readiness=false. Elapsed: 20.304882ms
Aug 21 20:27:46.297: INFO: Pod "downwardapi-volume-67a214b8-2655-4efc-a2ba-90bc8f01c125": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023926176s
Aug 21 20:27:48.301: INFO: Pod "downwardapi-volume-67a214b8-2655-4efc-a2ba-90bc8f01c125": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02764708s
STEP: Saw pod success
Aug 21 20:27:48.301: INFO: Pod "downwardapi-volume-67a214b8-2655-4efc-a2ba-90bc8f01c125" satisfied condition "success or failure"
Aug 21 20:27:48.303: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-67a214b8-2655-4efc-a2ba-90bc8f01c125 container client-container: 
STEP: delete the pod
Aug 21 20:27:48.331: INFO: Waiting for pod downwardapi-volume-67a214b8-2655-4efc-a2ba-90bc8f01c125 to disappear
Aug 21 20:27:48.336: INFO: Pod downwardapi-volume-67a214b8-2655-4efc-a2ba-90bc8f01c125 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:27:48.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-48" for this suite.
Aug 21 20:27:54.351: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:27:54.411: INFO: namespace downward-api-48 deletion completed in 6.072895677s

• [SLOW TEST:10.205 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
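
"should provide container's cpu request" uses a resourceFieldRef item instead of a fieldRef; the Divisor controls the unit in which the kubelet writes the value into the file. A sketch of such an item; the container name is taken from the log above ("client-container"), the millicore divisor is an assumption:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        file := corev1.DownwardAPIVolumeFile{
            Path: "cpu_request",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",       // must name a container in the pod
                Resource:      "requests.cpu",
                Divisor:       resource.MustParse("1m"), // report the request in millicores
            },
        }
        out, _ := json.MarshalIndent(file, "", "  ")
        fmt.Println(string(out))
    }

------------------------------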
SSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:27:54.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Aug 21 20:27:58.537: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Aug 21 20:28:13.648: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:28:13.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8419" for this suite.
Aug 21 20:28:19.667: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:28:19.724: INFO: namespace pods-8419 deletion completed in 6.067919815s

• [SLOW TEST:25.312 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
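
"deleting the pod gracefully" means a delete with an explicit grace period: the API server marks the pod terminating, and the kubelet must observe the termination notice and remove the pod within that window, which is what this spec verifies through the proxy. A sketch of such a delete, assuming 1.15-era client-go signatures and an illustrative pod name:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        grace := int64(30) // seconds the kubelet has to stop the containers
        // 1.15-era signature; newer client-go is Delete(ctx, name, metav1.DeleteOptions{...}).
        if err := cs.CoreV1().Pods("pods-8419").Delete("pod-submit-remove", // illustrative name
            &metav1.DeleteOptions{GracePeriodSeconds: &grace}); err != nil {
            panic(err)
        }
    }

------------------------------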
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:28:19.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Aug 21 20:28:19.781: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661" in namespace "downward-api-3496" to be "success or failure"
Aug 21 20:28:19.792: INFO: Pod "downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661": Phase="Pending", Reason="", readiness=false. Elapsed: 10.700398ms
Aug 21 20:28:21.888: INFO: Pod "downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106767528s
Aug 21 20:28:23.891: INFO: Pod "downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661": Phase="Running", Reason="", readiness=true. Elapsed: 4.109635879s
Aug 21 20:28:25.894: INFO: Pod "downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.112782286s
STEP: Saw pod success
Aug 21 20:28:25.894: INFO: Pod "downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661" satisfied condition "success or failure"
Aug 21 20:28:25.896: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661 container client-container: 
STEP: delete the pod
Aug 21 20:28:25.961: INFO: Waiting for pod downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661 to disappear
Aug 21 20:28:25.977: INFO: Pod downwardapi-volume-f20a8edc-6c39-4124-af73-e2edabd41661 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:28:25.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3496" for this suite.
Aug 21 20:28:31.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:28:32.042: INFO: namespace downward-api-3496 deletion completed in 6.062996291s

• [SLOW TEST:12.317 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
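
"should set DefaultMode on files" sets the volume-wide DefaultMode rather than per-item Mode bits. A sketch of the volume; 0400 is an assumed value, since the mode actually used by the test is not shown in this log:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400) // assumed mode bits
        vol := corev1.Volume{
            Name: "podinfo", // illustrative name
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    DefaultMode: &mode, // applied to every file below that lacks its own Mode
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }

------------------------------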
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:28:32.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0821 20:28:42.157943       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Aug 21 20:28:42.158: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:28:42.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5800" for this suite.
Aug 21 20:28:48.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:28:48.250: INFO: namespace gc-5800 deletion completed in 6.088780571s

• [SLOW TEST:16.207 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
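
"when not orphaning" corresponds to a non-orphaning propagation policy on the RC delete: the garbage collector then removes the RC's pods, which the spec waits for before gathering the metrics dumped above. A sketch of such a delete, assuming 1.15-era client-go signatures; the RC name is illustrative:

    package main

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // Background propagation: the GC deletes the RC's pods after the RC is
        // gone; metav1.DeletePropagationOrphan would leave them running instead.
        policy := metav1.DeletePropagationBackground
        if err := cs.CoreV1().ReplicationControllers("gc-5800").Delete("simpletest.rc", // illustrative
            &metav1.DeleteOptions{PropagationPolicy: &policy}); err != nil {
            panic(err)
        }
    }

------------------------------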
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:28:48.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 21 20:28:48.333: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: alternatives.log  containers/
[log damaged in extraction: the same node-log directory listing (alternatives.log, containers/) was returned for each of the 20 proxied requests, and everything from the end of this spec through the header of the next spec, "[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]", is missing here]
STEP: Creating a kubernetes client
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-65bdefd8-6e56-48b5-8918-b8c325678b65
STEP: Creating a pod to test consume configMaps
Aug 21 20:28:55.016: INFO: Waiting up to 5m0s for pod "pod-configmaps-0104cfb7-8b2c-43c8-a73f-1e957173f3da" in namespace "configmap-7734" to be "success or failure"
Aug 21 20:28:55.314: INFO: Pod "pod-configmaps-0104cfb7-8b2c-43c8-a73f-1e957173f3da": Phase="Pending", Reason="", readiness=false. Elapsed: 298.72554ms
Aug 21 20:28:57.329: INFO: Pod "pod-configmaps-0104cfb7-8b2c-43c8-a73f-1e957173f3da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.313724008s
Aug 21 20:28:59.334: INFO: Pod "pod-configmaps-0104cfb7-8b2c-43c8-a73f-1e957173f3da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.31788997s
STEP: Saw pod success
Aug 21 20:28:59.334: INFO: Pod "pod-configmaps-0104cfb7-8b2c-43c8-a73f-1e957173f3da" satisfied condition "success or failure"
Aug 21 20:28:59.336: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0104cfb7-8b2c-43c8-a73f-1e957173f3da container configmap-volume-test: 
STEP: delete the pod
Aug 21 20:28:59.508: INFO: Waiting for pod pod-configmaps-0104cfb7-8b2c-43c8-a73f-1e957173f3da to disappear
Aug 21 20:28:59.517: INFO: Pod pod-configmaps-0104cfb7-8b2c-43c8-a73f-1e957173f3da no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:28:59.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7734" for this suite.
Aug 21 20:29:05.587: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:29:05.655: INFO: namespace configmap-7734 deletion completed in 6.133494262s

• [SLOW TEST:11.167 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
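For reference, a minimal client-go sketch of the pattern this spec exercises -- mounting a ConfigMap as a volume with an explicit defaultMode -- assuming a recent client-go; the names, image, and mode here are illustrative, not the suite's actual fixtures:

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createConfigMapPod mounts a ConfigMap into a pod with every projected
// file created with mode 0400, the knob this test asserts on.
func createConfigMapPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "example-cm"},
		Data:       map[string]string{"data-1": "value-1"},
	}
	if _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{}); err != nil {
		return err
	}
	mode := int32(0400) // defaultMode applied to every file in the volume
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "example-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "cm-volume",
				VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{
					LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
					DefaultMode:          &mode,
				}},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/cm && cat /etc/cm/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cm-volume", MountPath: "/etc/cm"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
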
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:29:05.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Aug 21 20:29:05.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Aug 21 20:29:05.860: INFO: stderr: ""
Aug 21 20:29:05.860: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:29:05.860: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4069" for this suite.
Aug 21 20:29:11.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:29:11.947: INFO: namespace kubectl-4069 deletion completed in 6.075495587s

• [SLOW TEST:6.292 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
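The same check kubectl performs above can be done directly against the discovery API; a minimal sketch, assuming a recent client-go:

package example

import (
	"k8s.io/client-go/kubernetes"
)

// hasV1 reports whether the core "v1" groupVersion is served, mirroring
// the `kubectl api-versions` output validated above.
func hasV1(cs kubernetes.Interface) (bool, error) {
	groups, err := cs.Discovery().ServerGroups()
	if err != nil {
		return false, err
	}
	for _, g := range groups.Groups {
		for _, v := range g.Versions {
			if v.GroupVersion == "v1" {
				return true, nil
			}
		}
	}
	return false, nil
}
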
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:29:11.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Aug 21 20:29:12.028: INFO: Waiting up to 5m0s for pod "pod-f1f93789-9af6-4436-b986-27e1fe4a3fe5" in namespace "emptydir-4336" to be "success or failure"
Aug 21 20:29:12.033: INFO: Pod "pod-f1f93789-9af6-4436-b986-27e1fe4a3fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.283134ms
Aug 21 20:29:14.036: INFO: Pod "pod-f1f93789-9af6-4436-b986-27e1fe4a3fe5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007444123s
Aug 21 20:29:16.039: INFO: Pod "pod-f1f93789-9af6-4436-b986-27e1fe4a3fe5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010247449s
STEP: Saw pod success
Aug 21 20:29:16.039: INFO: Pod "pod-f1f93789-9af6-4436-b986-27e1fe4a3fe5" satisfied condition "success or failure"
Aug 21 20:29:16.041: INFO: Trying to get logs from node iruya-worker pod pod-f1f93789-9af6-4436-b986-27e1fe4a3fe5 container test-container: 
STEP: delete the pod
Aug 21 20:29:16.100: INFO: Waiting for pod pod-f1f93789-9af6-4436-b986-27e1fe4a3fe5 to disappear
Aug 21 20:29:16.117: INFO: Pod pod-f1f93789-9af6-4436-b986-27e1fe4a3fe5 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:29:16.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4336" for this suite.
Aug 21 20:29:22.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:29:22.233: INFO: namespace emptydir-4336 deletion completed in 6.102622546s

• [SLOW TEST:10.286 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
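A sketch of the (root,0666,tmpfs) case: a memory-backed emptyDir mounted into a pod whose container writes a file and reports its mode. Image and commands are illustrative stand-ins for the suite's mounttest container:

package example

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// tmpfsEmptyDirPod creates a 0666 file on a Medium=Memory emptyDir and
// prints its mode plus the mount, roughly what this spec verifies.
func tmpfsEmptyDirPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					// Medium=Memory backs the volume with tmpfs on the node.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /mnt/f && chmod 0666 /mnt/f && stat -c '%a' /mnt/f && mount | grep /mnt"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
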
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:29:22.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Aug 21 20:29:22.319: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-a,UID:5d24c65f-86b9-4ead-99c5-edf0447ffea1,ResourceVersion:1636379,Generation:0,CreationTimestamp:2020-08-21 20:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 20:29:22.319: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-a,UID:5d24c65f-86b9-4ead-99c5-edf0447ffea1,ResourceVersion:1636379,Generation:0,CreationTimestamp:2020-08-21 20:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Aug 21 20:29:32.325: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-a,UID:5d24c65f-86b9-4ead-99c5-edf0447ffea1,ResourceVersion:1636399,Generation:0,CreationTimestamp:2020-08-21 20:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Aug 21 20:29:32.325: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-a,UID:5d24c65f-86b9-4ead-99c5-edf0447ffea1,ResourceVersion:1636399,Generation:0,CreationTimestamp:2020-08-21 20:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Aug 21 20:29:42.332: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-a,UID:5d24c65f-86b9-4ead-99c5-edf0447ffea1,ResourceVersion:1636421,Generation:0,CreationTimestamp:2020-08-21 20:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 20:29:42.333: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-a,UID:5d24c65f-86b9-4ead-99c5-edf0447ffea1,ResourceVersion:1636421,Generation:0,CreationTimestamp:2020-08-21 20:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Aug 21 20:29:52.338: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-a,UID:5d24c65f-86b9-4ead-99c5-edf0447ffea1,ResourceVersion:1636442,Generation:0,CreationTimestamp:2020-08-21 20:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Aug 21 20:29:52.338: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-a,UID:5d24c65f-86b9-4ead-99c5-edf0447ffea1,ResourceVersion:1636442,Generation:0,CreationTimestamp:2020-08-21 20:29:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Aug 21 20:30:02.344: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-b,UID:656e6081-68fd-4e7e-b638-c41b74ef7d6e,ResourceVersion:1636462,Generation:0,CreationTimestamp:2020-08-21 20:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 20:30:02.344: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-b,UID:656e6081-68fd-4e7e-b638-c41b74ef7d6e,ResourceVersion:1636462,Generation:0,CreationTimestamp:2020-08-21 20:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Aug 21 20:30:12.350: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-b,UID:656e6081-68fd-4e7e-b638-c41b74ef7d6e,ResourceVersion:1636483,Generation:0,CreationTimestamp:2020-08-21 20:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Aug 21 20:30:12.350: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9205,SelfLink:/api/v1/namespaces/watch-9205/configmaps/e2e-watch-test-configmap-b,UID:656e6081-68fd-4e7e-b638-c41b74ef7d6e,ResourceVersion:1636483,Generation:0,CreationTimestamp:2020-08-21 20:30:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:30:22.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9205" for this suite.
Aug 21 20:30:28.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:30:28.425: INFO: namespace watch-9205 deletion completed in 6.07166168s

• [SLOW TEST:66.191 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
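The ADDED/MODIFIED/DELETED events above come from label-selected watches; a minimal client-go sketch of one such watcher (pass a selector such as "watch-this-configmap=multiple-watchers-A", assuming a recent client-go):

package example

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchLabelledConfigMaps prints ADDED/MODIFIED/DELETED notifications for
// ConfigMaps matching the given label selector, like the A/B watchers above.
func watchLabelledConfigMaps(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		switch ev.Type {
		case watch.Added, watch.Modified, watch.Deleted:
			fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
		}
	}
	return nil
}

The "A or B" watcher in the test is the same call with a set-based selector matching either label value.
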
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:30:28.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 21 20:30:28.520: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:30:29.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-8745" for this suite.
Aug 21 20:30:35.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:30:35.651: INFO: namespace custom-resource-definition-8745 deletion completed in 6.074228462s

• [SLOW TEST:7.226 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
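A sketch of the create/delete round trip this spec performs, written against today's apiextensions.k8s.io/v1 API (the 1.15 suite used v1beta1); the group, kind, and names are illustrative:

package example

import (
	"context"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createAndDeleteCRD registers a throwaway CustomResourceDefinition and
// deletes it again.
func createAndDeleteCRD(ctx context.Context, client apiextclient.Interface) error {
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				// v1 CRDs require a structural schema; v1beta1 did not.
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
	crds := client.ApiextensionsV1().CustomResourceDefinitions()
	if _, err := crds.Create(ctx, crd, metav1.CreateOptions{}); err != nil {
		return err
	}
	return crds.Delete(ctx, crd.Name, metav1.DeleteOptions{})
}
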
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:30:35.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-af4740a2-b82e-4862-809a-f8281c2f614d in namespace container-probe-1473
Aug 21 20:30:39.761: INFO: Started pod test-webserver-af4740a2-b82e-4862-809a-f8281c2f614d in namespace container-probe-1473
STEP: checking the pod's current state and verifying that restartCount is present
Aug 21 20:30:39.764: INFO: Initial restart count of pod test-webserver-af4740a2-b82e-4862-809a-f8281c2f614d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:34:41.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1473" for this suite.
Aug 21 20:34:47.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:34:48.031: INFO: namespace container-probe-1473 deletion completed in 6.333873926s

• [SLOW TEST:252.379 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
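The four-minute wait above asserts restartCount stays 0 while an HTTP liveness probe keeps passing. A minimal sketch of such a pod; it probes nginx on "/" rather than the suite's test-webserver /healthz endpoint, so image, path, and thresholds are illustrative:

package example

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// healthyWebserverPod declares an HTTP liveness probe; as long as the server
// keeps answering 2xx, the kubelet never restarts the container.
func healthyWebserverPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx",
				LivenessProbe: &corev1.Probe{
					// This field is named Handler in client-go releases before v0.23.
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    3,
				},
			}},
		},
	}
}
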
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:34:48.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Aug 21 20:34:48.152: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Aug 21 20:34:48.163: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:48.167: INFO: Number of nodes with available pods: 0
Aug 21 20:34:48.167: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:34:49.172: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:49.175: INFO: Number of nodes with available pods: 0
Aug 21 20:34:49.175: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:34:50.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:50.739: INFO: Number of nodes with available pods: 0
Aug 21 20:34:50.739: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:34:51.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:51.177: INFO: Number of nodes with available pods: 0
Aug 21 20:34:51.177: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:34:52.362: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:52.673: INFO: Number of nodes with available pods: 0
Aug 21 20:34:52.673: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:34:53.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:53.211: INFO: Number of nodes with available pods: 0
Aug 21 20:34:53.211: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:34:54.173: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:54.177: INFO: Number of nodes with available pods: 1
Aug 21 20:34:54.177: INFO: Node iruya-worker2 is running more than one daemon pod
Aug 21 20:34:57.875: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:58.177: INFO: Number of nodes with available pods: 2
Aug 21 20:34:58.177: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Aug 21 20:34:58.387: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:34:58.387: INFO: Wrong image for pod: daemon-set-cthf5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:34:58.419: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:34:59.424: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:34:59.424: INFO: Wrong image for pod: daemon-set-cthf5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:34:59.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:00.423: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:00.423: INFO: Wrong image for pod: daemon-set-cthf5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:00.423: INFO: Pod daemon-set-cthf5 is not available
Aug 21 20:35:00.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:01.425: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:01.425: INFO: Wrong image for pod: daemon-set-cthf5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:01.425: INFO: Pod daemon-set-cthf5 is not available
Aug 21 20:35:01.429: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:02.607: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:02.607: INFO: Wrong image for pod: daemon-set-cthf5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:02.607: INFO: Pod daemon-set-cthf5 is not available
Aug 21 20:35:02.639: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:03.423: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:03.423: INFO: Wrong image for pod: daemon-set-cthf5. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:03.423: INFO: Pod daemon-set-cthf5 is not available
Aug 21 20:35:03.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:04.423: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:04.423: INFO: Pod daemon-set-x7k4t is not available
Aug 21 20:35:04.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:05.423: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:05.423: INFO: Pod daemon-set-x7k4t is not available
Aug 21 20:35:05.427: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:06.423: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:06.423: INFO: Pod daemon-set-x7k4t is not available
Aug 21 20:35:06.425: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:07.422: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:07.422: INFO: Pod daemon-set-x7k4t is not available
Aug 21 20:35:07.426: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:09.380: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:09.614: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:10.425: INFO: Wrong image for pod: daemon-set-cn8w2. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Aug 21 20:35:10.425: INFO: Pod daemon-set-cn8w2 is not available
Aug 21 20:35:10.462: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:11.425: INFO: Pod daemon-set-x77qx is not available
Aug 21 20:35:11.445: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
Aug 21 20:35:11.449: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:11.455: INFO: Number of nodes with available pods: 1
Aug 21 20:35:11.455: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:35:12.634: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:12.638: INFO: Number of nodes with available pods: 1
Aug 21 20:35:12.638: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:35:13.460: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:13.467: INFO: Number of nodes with available pods: 1
Aug 21 20:35:13.467: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:35:14.461: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:14.464: INFO: Number of nodes with available pods: 1
Aug 21 20:35:14.464: INFO: Node iruya-worker is running more than one daemon pod
Aug 21 20:35:15.585: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Aug 21 20:35:15.661: INFO: Number of nodes with available pods: 2
Aug 21 20:35:15.661: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-673, will wait for the garbage collector to delete the pods
Aug 21 20:35:15.743: INFO: Deleting DaemonSet.extensions daemon-set took: 5.314541ms
Aug 21 20:35:16.043: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.279679ms
Aug 21 20:35:23.655: INFO: Number of nodes with available pods: 0
Aug 21 20:35:23.655: INFO: Number of running nodes: 0, number of available pods: 0
Aug 21 20:35:23.657: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-673/daemonsets","resourceVersion":"1637213"},"items":null}

Aug 21 20:35:23.659: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-673/pods","resourceVersion":"1637213"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:35:23.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-673" for this suite.
Aug 21 20:35:31.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:35:31.750: INFO: namespace daemonsets-673 deletion completed in 8.08206675s

• [SLOW TEST:43.719 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
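The "Update daemon pods image" step is a patch to the DaemonSet's pod template; with updateStrategy RollingUpdate the controller then replaces pods node by node, producing the wrong-image/not-available churn logged above. A minimal sketch, assuming a recent client-go and a container named "app" (the test's container name differs):

package example

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
)

// rollDaemonSetImage patches the pod template image of a RollingUpdate
// DaemonSet, triggering a node-by-node rollout.
func rollDaemonSetImage(ctx context.Context, cs kubernetes.Interface, ns, name, image string) error {
	patch := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"app","image":"` + image + `"}]}}}}`)
	_, err := cs.AppsV1().DaemonSets(ns).Patch(ctx, name, types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	return err
}
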
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:35:31.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3504
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Aug 21 20:35:31.828: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Aug 21 20:35:54.009: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.42:8080/dial?request=hostName&protocol=http&host=10.244.1.41&port=8080&tries=1'] Namespace:pod-network-test-3504 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 20:35:54.009: INFO: >>> kubeConfig: /root/.kube/config
I0821 20:35:54.045402       6 log.go:172] (0xc0004146e0) (0xc00147ec80) Create stream
I0821 20:35:54.045440       6 log.go:172] (0xc0004146e0) (0xc00147ec80) Stream added, broadcasting: 1
I0821 20:35:54.049647       6 log.go:172] (0xc0004146e0) Reply frame received for 1
I0821 20:35:54.049709       6 log.go:172] (0xc0004146e0) (0xc00147edc0) Create stream
I0821 20:35:54.049726       6 log.go:172] (0xc0004146e0) (0xc00147edc0) Stream added, broadcasting: 3
I0821 20:35:54.050705       6 log.go:172] (0xc0004146e0) Reply frame received for 3
I0821 20:35:54.050745       6 log.go:172] (0xc0004146e0) (0xc0022e0320) Create stream
I0821 20:35:54.050758       6 log.go:172] (0xc0004146e0) (0xc0022e0320) Stream added, broadcasting: 5
I0821 20:35:54.051628       6 log.go:172] (0xc0004146e0) Reply frame received for 5
I0821 20:35:54.135274       6 log.go:172] (0xc0004146e0) Data frame received for 3
I0821 20:35:54.135305       6 log.go:172] (0xc00147edc0) (3) Data frame handling
I0821 20:35:54.135320       6 log.go:172] (0xc00147edc0) (3) Data frame sent
I0821 20:35:54.135959       6 log.go:172] (0xc0004146e0) Data frame received for 5
I0821 20:35:54.135986       6 log.go:172] (0xc0022e0320) (5) Data frame handling
I0821 20:35:54.136027       6 log.go:172] (0xc0004146e0) Data frame received for 3
I0821 20:35:54.136045       6 log.go:172] (0xc00147edc0) (3) Data frame handling
I0821 20:35:54.137961       6 log.go:172] (0xc0004146e0) Data frame received for 1
I0821 20:35:54.137992       6 log.go:172] (0xc00147ec80) (1) Data frame handling
I0821 20:35:54.138020       6 log.go:172] (0xc00147ec80) (1) Data frame sent
I0821 20:35:54.138046       6 log.go:172] (0xc0004146e0) (0xc00147ec80) Stream removed, broadcasting: 1
I0821 20:35:54.138065       6 log.go:172] (0xc0004146e0) Go away received
I0821 20:35:54.138230       6 log.go:172] (0xc0004146e0) (0xc00147ec80) Stream removed, broadcasting: 1
I0821 20:35:54.138263       6 log.go:172] (0xc0004146e0) (0xc00147edc0) Stream removed, broadcasting: 3
I0821 20:35:54.138277       6 log.go:172] (0xc0004146e0) (0xc0022e0320) Stream removed, broadcasting: 5
Aug 21 20:35:54.138: INFO: Waiting for endpoints: map[]
Aug 21 20:35:54.141: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.42:8080/dial?request=hostName&protocol=http&host=10.244.2.101&port=8080&tries=1'] Namespace:pod-network-test-3504 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Aug 21 20:35:54.141: INFO: >>> kubeConfig: /root/.kube/config
I0821 20:35:54.175245       6 log.go:172] (0xc0006b5970) (0xc00054f180) Create stream
I0821 20:35:54.175279       6 log.go:172] (0xc0006b5970) (0xc00054f180) Stream added, broadcasting: 1
I0821 20:35:54.178332       6 log.go:172] (0xc0006b5970) Reply frame received for 1
I0821 20:35:54.178381       6 log.go:172] (0xc0006b5970) (0xc0022e03c0) Create stream
I0821 20:35:54.178396       6 log.go:172] (0xc0006b5970) (0xc0022e03c0) Stream added, broadcasting: 3
I0821 20:35:54.179301       6 log.go:172] (0xc0006b5970) Reply frame received for 3
I0821 20:35:54.179329       6 log.go:172] (0xc0006b5970) (0xc0022e0460) Create stream
I0821 20:35:54.179336       6 log.go:172] (0xc0006b5970) (0xc0022e0460) Stream added, broadcasting: 5
I0821 20:35:54.180536       6 log.go:172] (0xc0006b5970) Reply frame received for 5
I0821 20:35:54.253527       6 log.go:172] (0xc0006b5970) Data frame received for 3
I0821 20:35:54.253546       6 log.go:172] (0xc0022e03c0) (3) Data frame handling
I0821 20:35:54.253568       6 log.go:172] (0xc0022e03c0) (3) Data frame sent
I0821 20:35:54.253947       6 log.go:172] (0xc0006b5970) Data frame received for 3
I0821 20:35:54.253984       6 log.go:172] (0xc0022e03c0) (3) Data frame handling
I0821 20:35:54.254005       6 log.go:172] (0xc0006b5970) Data frame received for 5
I0821 20:35:54.254019       6 log.go:172] (0xc0022e0460) (5) Data frame handling
I0821 20:35:54.255695       6 log.go:172] (0xc0006b5970) Data frame received for 1
I0821 20:35:54.255715       6 log.go:172] (0xc00054f180) (1) Data frame handling
I0821 20:35:54.255721       6 log.go:172] (0xc00054f180) (1) Data frame sent
I0821 20:35:54.255730       6 log.go:172] (0xc0006b5970) (0xc00054f180) Stream removed, broadcasting: 1
I0821 20:35:54.255747       6 log.go:172] (0xc0006b5970) Go away received
I0821 20:35:54.255798       6 log.go:172] (0xc0006b5970) (0xc00054f180) Stream removed, broadcasting: 1
I0821 20:35:54.255817       6 log.go:172] (0xc0006b5970) (0xc0022e03c0) Stream removed, broadcasting: 3
I0821 20:35:54.255827       6 log.go:172] (0xc0006b5970) (0xc0022e0460) Stream removed, broadcasting: 5
Aug 21 20:35:54.255: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:35:54.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3504" for this suite.
Aug 21 20:36:18.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:36:18.379: INFO: namespace pod-network-test-3504 deletion completed in 24.11906222s

• [SLOW TEST:46.628 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
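The curl commands in the log ask one test pod's webserver to dial another pod and report which hostnames answered; "Waiting for endpoints: map[]" means no endpoints remain unreached. A sketch of that /dial request in Go, assuming the test image's convention of returning a JSON "responses" array:

package example

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// dialCheck asks the webserver at proxyIP to dial targetIP over HTTP and
// returns the hostnames it reached, mirroring the curl in the log.
func dialCheck(proxyIP, targetIP string) ([]string, error) {
	url := fmt.Sprintf("http://%s:8080/dial?request=hostName&protocol=http&host=%s&port=8080&tries=1",
		proxyIP, targetIP)
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var out struct {
		Responses []string `json:"responses"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return nil, err
	}
	return out.Responses, nil
}
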
SSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:36:18.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Aug 21 20:36:23.045: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9234 pod-service-account-97280559-c583-463d-bf64-aeed92bc1427 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Aug 21 20:36:23.348: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9234 pod-service-account-97280559-c583-463d-bf64-aeed92bc1427 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Aug 21 20:36:23.570: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9234 pod-service-account-97280559-c583-463d-bf64-aeed92bc1427 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:36:23.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9234" for this suite.
Aug 21 20:36:29.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:36:29.882: INFO: namespace svcaccounts-9234 deletion completed in 6.116758023s

• [SLOW TEST:11.503 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
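The three kubectl exec ... cat commands above read the files the kubelet projects into every pod that mounts the auto-created token. The same check from inside the pod, as a minimal sketch:

package example

import (
	"fmt"
	"os"
)

// printServiceAccountFiles reads the token, CA bundle, and namespace that the
// service account admission controller mounts into the pod.
func printServiceAccountFiles() error {
	base := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, f := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(base + "/" + f)
		if err != nil {
			return err
		}
		fmt.Printf("%s: %d bytes\n", f, len(b))
	}
	return nil
}
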
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:36:29.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-2481180d-e9a4-4cc1-bac0-6540a78a25cb
STEP: Creating a pod to test consume configMaps
Aug 21 20:36:29.972: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-770f9505-93e5-408b-a3fe-3bbf868bf7a2" in namespace "projected-8401" to be "success or failure"
Aug 21 20:36:29.982: INFO: Pod "pod-projected-configmaps-770f9505-93e5-408b-a3fe-3bbf868bf7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.670896ms
Aug 21 20:36:32.129: INFO: Pod "pod-projected-configmaps-770f9505-93e5-408b-a3fe-3bbf868bf7a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.156959797s
Aug 21 20:36:34.133: INFO: Pod "pod-projected-configmaps-770f9505-93e5-408b-a3fe-3bbf868bf7a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.160935735s
STEP: Saw pod success
Aug 21 20:36:34.133: INFO: Pod "pod-projected-configmaps-770f9505-93e5-408b-a3fe-3bbf868bf7a2" satisfied condition "success or failure"
Aug 21 20:36:34.135: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-770f9505-93e5-408b-a3fe-3bbf868bf7a2 container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 20:36:34.196: INFO: Waiting for pod pod-projected-configmaps-770f9505-93e5-408b-a3fe-3bbf868bf7a2 to disappear
Aug 21 20:36:34.209: INFO: Pod pod-projected-configmaps-770f9505-93e5-408b-a3fe-3bbf868bf7a2 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:36:34.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8401" for this suite.
Aug 21 20:36:40.225: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:36:40.306: INFO: namespace projected-8401 deletion completed in 6.093305526s

• [SLOW TEST:10.424 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
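A projected volume delivers the same ConfigMap data as a plain configMap volume, but through a plugin that can also merge secrets and downward API items into one directory. A minimal sketch of the volume source this test mounts, with an illustrative volume name:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume wraps a ConfigMap in a projected volume source.
func projectedConfigMapVolume(cmName string) corev1.Volume {
	return corev1.Volume{
		Name: "projected-cm",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
					},
				}},
			},
		},
	}
}
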
S
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:36:40.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Aug 21 20:36:48.939: INFO: Successfully updated pod "annotationupdateaff81cf3-5b99-4cce-94c9-36372953e95e"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:36:50.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6948" for this suite.
Aug 21 20:37:12.981: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:37:13.105: INFO: namespace downward-api-6948 deletion completed in 22.140664219s

• [SLOW TEST:32.799 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
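The "Successfully updated pod" line above is the annotation update; the kubelet then rewrites the downward API file in place, without restarting the container. A sketch of the volume that makes the pod's annotations visible as a file:

package example

import (
	corev1 "k8s.io/api/core/v1"
)

// annotationsDownwardVolume exposes the pod's own annotations at
// <mountPath>/annotations; the kubelet refreshes the file on updates.
func annotationsDownwardVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "annotations",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
				}},
			},
		},
	}
}
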
SS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:37:13.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-3c533313-40e0-459c-be53-cf8597844ec0
STEP: Creating a pod to test consume configMaps
Aug 21 20:37:13.443: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ab2cf0f3-7236-4620-af50-648c396e45ba" in namespace "projected-1092" to be "success or failure"
Aug 21 20:37:13.455: INFO: Pod "pod-projected-configmaps-ab2cf0f3-7236-4620-af50-648c396e45ba": Phase="Pending", Reason="", readiness=false. Elapsed: 12.019685ms
Aug 21 20:37:15.573: INFO: Pod "pod-projected-configmaps-ab2cf0f3-7236-4620-af50-648c396e45ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129938057s
Aug 21 20:37:17.969: INFO: Pod "pod-projected-configmaps-ab2cf0f3-7236-4620-af50-648c396e45ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.525667468s
STEP: Saw pod success
Aug 21 20:37:17.969: INFO: Pod "pod-projected-configmaps-ab2cf0f3-7236-4620-af50-648c396e45ba" satisfied condition "success or failure"
Aug 21 20:37:17.976: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-ab2cf0f3-7236-4620-af50-648c396e45ba container projected-configmap-volume-test: 
STEP: delete the pod
Aug 21 20:37:18.063: INFO: Waiting for pod pod-projected-configmaps-ab2cf0f3-7236-4620-af50-648c396e45ba to disappear
Aug 21 20:37:18.123: INFO: Pod pod-projected-configmaps-ab2cf0f3-7236-4620-af50-648c396e45ba no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:37:18.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1092" for this suite.
Aug 21 20:37:24.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:37:24.230: INFO: namespace projected-1092 deletion completed in 6.103160279s

• [SLOW TEST:11.125 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:37:24.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-6ce4f8a7-589b-4de8-82f4-4e4b0a828079
STEP: Creating a pod to test consume configMaps
Aug 21 20:37:24.382: INFO: Waiting up to 5m0s for pod "pod-configmaps-fb6be475-0c66-40e1-8945-046f560c70af" in namespace "configmap-9973" to be "success or failure"
Aug 21 20:37:24.391: INFO: Pod "pod-configmaps-fb6be475-0c66-40e1-8945-046f560c70af": Phase="Pending", Reason="", readiness=false. Elapsed: 8.750119ms
Aug 21 20:37:26.483: INFO: Pod "pod-configmaps-fb6be475-0c66-40e1-8945-046f560c70af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100448781s
Aug 21 20:37:28.487: INFO: Pod "pod-configmaps-fb6be475-0c66-40e1-8945-046f560c70af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104818035s
STEP: Saw pod success
Aug 21 20:37:28.487: INFO: Pod "pod-configmaps-fb6be475-0c66-40e1-8945-046f560c70af" satisfied condition "success or failure"
Aug 21 20:37:28.490: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fb6be475-0c66-40e1-8945-046f560c70af container configmap-volume-test: 
STEP: delete the pod
Aug 21 20:37:28.523: INFO: Waiting for pod pod-configmaps-fb6be475-0c66-40e1-8945-046f560c70af to disappear
Aug 21 20:37:28.535: INFO: Pod pod-configmaps-fb6be475-0c66-40e1-8945-046f560c70af no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:37:28.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9973" for this suite.
Aug 21 20:37:34.575: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:37:34.648: INFO: namespace configmap-9973 deletion completed in 6.110448648s

• [SLOW TEST:10.417 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
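The non-root variant of the ConfigMap volume spec differs mainly in the pod security context. A minimal sketch, assuming the same illustrative test image; runAsUser is the detail the [LinuxOnly] non-root qualifier is about:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-nonroot
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000              # any non-zero UID makes the container run as non-root
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/mounttest:1.0   # assumed image, as in the sketch above
    args: ["--file_content=/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume    # this run used a UUID-suffixed name
      defaultMode: 0644              # world-readable, so the non-root user can read the projected key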
SSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:37:34.648: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods changes
Aug 21 20:37:39.753: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:37:40.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4720" for this suite.
Aug 21 20:38:02.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:38:02.899: INFO: namespace replicaset-4720 deletion completed in 22.121889826s

• [SLOW TEST:28.251 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
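The Given/When/Then steps above map to a bare pod carrying a 'name' label plus a ReplicaSet whose selector matches it. A sketch with illustrative values:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption-release
  labels:
    name: pod-adoption-release       # the 'name' label the selector matches on
spec:
  containers:
  - name: pod-adoption-release
    image: nginx:1.14-alpine         # illustrative image
---
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release     # matches the pre-existing orphan pod
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: pod-adoption-release
        image: nginx:1.14-alpine

Because the orphan pod already satisfies the selector, the controller adopts it instead of creating a new replica; changing the pod's 'name' label to a non-matching value removes its ownerReference (the pod is released) and the ReplicaSet creates a replacement.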
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application 
  should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:38:02.900: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Aug 21 20:38:02.981: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Aug 21 20:38:02.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9647'
Aug 21 20:38:05.791: INFO: stderr: ""
Aug 21 20:38:05.791: INFO: stdout: "service/redis-slave created\n"
Aug 21 20:38:05.792: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Aug 21 20:38:05.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9647'
Aug 21 20:38:06.084: INFO: stderr: ""
Aug 21 20:38:06.084: INFO: stdout: "service/redis-master created\n"
Aug 21 20:38:06.084: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Aug 21 20:38:06.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9647'
Aug 21 20:38:06.439: INFO: stderr: ""
Aug 21 20:38:06.439: INFO: stdout: "service/frontend created\n"
Aug 21 20:38:06.439: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Aug 21 20:38:06.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9647'
Aug 21 20:38:06.724: INFO: stderr: ""
Aug 21 20:38:06.724: INFO: stdout: "deployment.apps/frontend created\n"
Aug 21 20:38:06.725: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Aug 21 20:38:06.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9647'
Aug 21 20:38:07.006: INFO: stderr: ""
Aug 21 20:38:07.006: INFO: stdout: "deployment.apps/redis-master created\n"
Aug 21 20:38:07.007: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Aug 21 20:38:07.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9647'
Aug 21 20:38:07.349: INFO: stderr: ""
Aug 21 20:38:07.349: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Aug 21 20:38:07.349: INFO: Waiting for all frontend pods to be Running.
Aug 21 20:38:17.399: INFO: Waiting for frontend to serve content.
Aug 21 20:38:17.417: INFO: Trying to add a new entry to the guestbook.
Aug 21 20:38:17.434: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Aug 21 20:38:17.449: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9647'
Aug 21 20:38:17.589: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 20:38:17.589: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 20:38:17.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9647'
Aug 21 20:38:17.757: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 20:38:17.757: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 20:38:17.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9647'
Aug 21 20:38:17.915: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 20:38:17.915: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 20:38:17.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9647'
Aug 21 20:38:18.026: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 20:38:18.026: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 20:38:18.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9647'
Aug 21 20:38:18.178: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 20:38:18.178: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Aug 21 20:38:18.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9647'
Aug 21 20:38:18.358: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Aug 21 20:38:18.358: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:38:18.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9647" for this suite.
Aug 21 20:39:08.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:39:08.560: INFO: namespace kubectl-9647 deletion completed in 50.11886264s

• [SLOW TEST:65.660 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
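The six cleanup steps above all follow one pattern: each manifest echoed earlier in the log is piped back into kubectl delete with immediate deletion forced. In shell form, with MANIFEST standing in for each YAML document:

# the cleanup invocation the test repeats per resource; --grace-period=0 --force
# skips graceful termination, which is what triggers the warning printed on every call
echo "$MANIFEST" | kubectl --kubeconfig=/root/.kube/config \
  delete --grace-period=0 --force -f - --namespace=kubectl-9647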
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:39:08.561: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:39:08.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6138" for this suite.
Aug 21 20:39:14.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:39:14.797: INFO: namespace services-6138 deletion completed in 6.126726031s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.236 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
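This spec emits no STEP lines because it only reads existing state: the conformance test fetches the built-in 'kubernetes' Service in the default namespace, the secure front door to the API server. What that Service typically looks like (illustrative; the target port and IPs are cluster-dependent):

# roughly what `kubectl get service kubernetes -n default -o yaml` returns on a cluster like this one
apiVersion: v1
kind: Service
metadata:
  name: kubernetes
  namespace: default
spec:
  type: ClusterIP
  ports:
  - name: https
    port: 443            # the secure master service port the spec's name refers to
    protocol: TCP
    targetPort: 6443     # API server's secure port; varies by cluster setup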
SSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Aug 21 20:39:14.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Aug 21 20:39:14.832: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Aug 21 20:39:15.209: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Aug 21 20:39:17.573: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733639155, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733639155, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733639155, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733639155, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 20:39:19.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733639155, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733639155, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63733639155, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63733639155, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Aug 21 20:39:22.201: INFO: Waited 619.071259ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Aug 21 20:39:22.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-830" for this suite.
Aug 21 20:39:30.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Aug 21 20:39:30.928: INFO: namespace aggregator-830 deletion completed in 8.287794313s

• [SLOW TEST:16.130 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
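Registering the sample API server, per the step above, revolves around an APIService object that tells the aggregator where to proxy the new group/version. The log does not echo that object; a sketch under the group/version the 1.15 sample apiserver conventionally uses (all names here are assumptions):

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io       # assumed group/version of the sample apiserver
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                 # assumed name of the Service fronting sample-apiserver-deployment
    namespace: aggregator-830        # the test namespace from this run
  caBundle: "<CA bundle, base64>"    # elided; the test injects the CA it generated

The two deployment status dumps above are the test polling sample-apiserver-deployment until its single replica turns Ready; "MinimumReplicasUnavailable" simply records that readiness had not yet been reached on those polls.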
S
Aug 21 20:39:30.928: INFO: Running AfterSuite actions on all nodes
Aug 21 20:39:30.928: INFO: Running AfterSuite actions on node 1
Aug 21 20:39:30.928: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6504.753 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS