I0619 12:55:52.058955 6 e2e.go:243] Starting e2e run "5d4a0555-759e-4286-b1eb-4cf6f98383c4" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1592571351 - Will randomize all specs
Will run 215 of 4412 specs

Jun 19 12:55:52.248: INFO: >>> kubeConfig: /root/.kube/config
Jun 19 12:55:52.250: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jun 19 12:55:52.274: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jun 19 12:55:52.308: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jun 19 12:55:52.308: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jun 19 12:55:52.308: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jun 19 12:55:52.319: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Jun 19 12:55:52.319: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jun 19 12:55:52.319: INFO: e2e test version: v1.15.11
Jun 19 12:55:52.320: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:55:52.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
Jun 19 12:55:52.382: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 19 12:55:52.390: INFO: Waiting up to 5m0s for pod "pod-f4a3c0c0-908a-49b1-b979-57804ad2d49c" in namespace "emptydir-73" to be "success or failure"
Jun 19 12:55:52.411: INFO: Pod "pod-f4a3c0c0-908a-49b1-b979-57804ad2d49c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.165919ms
Jun 19 12:55:54.415: INFO: Pod "pod-f4a3c0c0-908a-49b1-b979-57804ad2d49c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025472573s
Jun 19 12:55:56.420: INFO: Pod "pod-f4a3c0c0-908a-49b1-b979-57804ad2d49c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030322294s
STEP: Saw pod success
Jun 19 12:55:56.420: INFO: Pod "pod-f4a3c0c0-908a-49b1-b979-57804ad2d49c" satisfied condition "success or failure"
Jun 19 12:55:56.423: INFO: Trying to get logs from node iruya-worker2 pod pod-f4a3c0c0-908a-49b1-b979-57804ad2d49c container test-container:
STEP: delete the pod
Jun 19 12:55:56.446: INFO: Waiting for pod pod-f4a3c0c0-908a-49b1-b979-57804ad2d49c to disappear
Jun 19 12:55:56.471: INFO: Pod pod-f4a3c0c0-908a-49b1-b979-57804ad2d49c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:55:56.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-73" for this suite.
Jun 19 12:56:02.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:56:02.634: INFO: namespace emptydir-73 deletion completed in 6.112251494s

• [SLOW TEST:10.313 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
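Note: the emptyDir fixture pod is not echoed into the log. A hand-built approximation of what this spec exercises (pod name, image, and user ID below are illustrative assumptions, not the exact e2e fixture):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo        # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # "non-root" in the spec name
  containers:
  - name: test-container
    image: busybox                # illustrative image
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /mnt/volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # "default" medium: node-local disk, not tmpfs
EOF
kubectl logs emptydir-0666-demo   # shows the 0666 mode once the pod has Succeeded
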
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:56:02.634: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-67babce4-7919-468f-809b-7eb267e29597
STEP: Creating a pod to test consume configMaps
Jun 19 12:56:02.710: INFO: Waiting up to 5m0s for pod "pod-configmaps-64181590-3ea2-48d9-a5d1-a0b39219f6c5" in namespace "configmap-9840" to be "success or failure"
Jun 19 12:56:02.714: INFO: Pod "pod-configmaps-64181590-3ea2-48d9-a5d1-a0b39219f6c5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.948471ms
Jun 19 12:56:04.719: INFO: Pod "pod-configmaps-64181590-3ea2-48d9-a5d1-a0b39219f6c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00860574s
Jun 19 12:56:06.723: INFO: Pod "pod-configmaps-64181590-3ea2-48d9-a5d1-a0b39219f6c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012866028s
STEP: Saw pod success
Jun 19 12:56:06.723: INFO: Pod "pod-configmaps-64181590-3ea2-48d9-a5d1-a0b39219f6c5" satisfied condition "success or failure"
Jun 19 12:56:06.727: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-64181590-3ea2-48d9-a5d1-a0b39219f6c5 container configmap-volume-test:
STEP: delete the pod
Jun 19 12:56:06.769: INFO: Waiting for pod pod-configmaps-64181590-3ea2-48d9-a5d1-a0b39219f6c5 to disappear
Jun 19 12:56:06.775: INFO: Pod pod-configmaps-64181590-3ea2-48d9-a5d1-a0b39219f6c5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:56:06.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9840" for this suite.
Jun 19 12:56:12.791: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:56:12.872: INFO: namespace configmap-9840 deletion completed in 6.09414445s

• [SLOW TEST:10.237 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
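Note: the "consume configMaps" pod is not printed either. An illustrative equivalent (names and image are assumptions, not the exact fixture):

kubectl create configmap configmap-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # run as non-root, as the spec name requires
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["cat", "/etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-demo
EOF
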
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:56:12.872: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 19 12:56:15.962: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:56:16.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4664" for this suite.
Jun 19 12:56:22.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:56:22.321: INFO: namespace container-runtime-4664 deletion completed in 6.108133297s

• [SLOW TEST:9.449 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
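Note: the mechanism under test is terminationMessagePath. A minimal sketch (names and image are illustrative, not the e2e fixture) that reproduces the "DONE" message seen above:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                                      # non-root writer
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log  # non-default path
EOF
# Once the container exits, the kubelet surfaces the file contents here:
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
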
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:56:22.321: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 19 12:56:22.360: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d73ab3c-96f1-4640-9638-4088e46333dd" in namespace "projected-4781" to be "success or failure"
Jun 19 12:56:22.401: INFO: Pod "downwardapi-volume-4d73ab3c-96f1-4640-9638-4088e46333dd": Phase="Pending", Reason="", readiness=false. Elapsed: 40.932487ms
Jun 19 12:56:24.406: INFO: Pod "downwardapi-volume-4d73ab3c-96f1-4640-9638-4088e46333dd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045672075s
Jun 19 12:56:26.411: INFO: Pod "downwardapi-volume-4d73ab3c-96f1-4640-9638-4088e46333dd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050233876s
STEP: Saw pod success
Jun 19 12:56:26.411: INFO: Pod "downwardapi-volume-4d73ab3c-96f1-4640-9638-4088e46333dd" satisfied condition "success or failure"
Jun 19 12:56:26.414: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-4d73ab3c-96f1-4640-9638-4088e46333dd container client-container:
STEP: delete the pod
Jun 19 12:56:26.469: INFO: Waiting for pod downwardapi-volume-4d73ab3c-96f1-4640-9638-4088e46333dd to disappear
Jun 19 12:56:26.479: INFO: Pod downwardapi-volume-4d73ab3c-96f1-4640-9638-4088e46333dd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:56:26.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4781" for this suite.
Jun 19 12:56:32.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:56:32.570: INFO: namespace projected-4781 deletion completed in 6.088526483s

• [SLOW TEST:10.249 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
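Note: "podname only" means a projected downwardAPI volume exposing metadata.name. An illustrative equivalent (names and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-podname-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/podname"]   # prints the pod's own name
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
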
[sig-storage] Secrets
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:56:32.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-5a348998-d6c7-41a1-a496-04f3802a0323
STEP: Creating a pod to test consume secrets
Jun 19 12:56:32.645: INFO: Waiting up to 5m0s for pod "pod-secrets-cd14c0dc-68b3-4389-b7f4-58293f98a98f" in namespace "secrets-6858" to be "success or failure"
Jun 19 12:56:32.676: INFO: Pod "pod-secrets-cd14c0dc-68b3-4389-b7f4-58293f98a98f": Phase="Pending", Reason="", readiness=false. Elapsed: 30.77592ms
Jun 19 12:56:34.681: INFO: Pod "pod-secrets-cd14c0dc-68b3-4389-b7f4-58293f98a98f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035503153s
Jun 19 12:56:36.685: INFO: Pod "pod-secrets-cd14c0dc-68b3-4389-b7f4-58293f98a98f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039475601s
STEP: Saw pod success
Jun 19 12:56:36.685: INFO: Pod "pod-secrets-cd14c0dc-68b3-4389-b7f4-58293f98a98f" satisfied condition "success or failure"
Jun 19 12:56:36.688: INFO: Trying to get logs from node iruya-worker pod pod-secrets-cd14c0dc-68b3-4389-b7f4-58293f98a98f container secret-volume-test:
STEP: delete the pod
Jun 19 12:56:36.722: INFO: Waiting for pod pod-secrets-cd14c0dc-68b3-4389-b7f4-58293f98a98f to disappear
Jun 19 12:56:36.734: INFO: Pod pod-secrets-cd14c0dc-68b3-4389-b7f4-58293f98a98f no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:56:36.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6858" for this suite.
Jun 19 12:56:42.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:56:42.826: INFO: namespace secrets-6858 deletion completed in 6.089663582s

• [SLOW TEST:10.255 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
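Note: an illustrative secret-as-volume equivalent (names and image are assumptions, not the e2e fixture):

kubectl create secret generic secret-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-demo
EOF
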
[sig-node] ConfigMap
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:56:42.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2268/configmap-test-a6a60e74-f89b-4c16-9a6c-497daaf07d30
STEP: Creating a pod to test consume configMaps
Jun 19 12:56:42.897: INFO: Waiting up to 5m0s for pod "pod-configmaps-3e8602b4-1e1f-4ce0-8f7d-5e3ce7749c1d" in namespace "configmap-2268" to be "success or failure"
Jun 19 12:56:42.916: INFO: Pod "pod-configmaps-3e8602b4-1e1f-4ce0-8f7d-5e3ce7749c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 18.706303ms
Jun 19 12:56:44.920: INFO: Pod "pod-configmaps-3e8602b4-1e1f-4ce0-8f7d-5e3ce7749c1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022401564s
Jun 19 12:56:46.925: INFO: Pod "pod-configmaps-3e8602b4-1e1f-4ce0-8f7d-5e3ce7749c1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027115536s
STEP: Saw pod success
Jun 19 12:56:46.925: INFO: Pod "pod-configmaps-3e8602b4-1e1f-4ce0-8f7d-5e3ce7749c1d" satisfied condition "success or failure"
Jun 19 12:56:46.928: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3e8602b4-1e1f-4ce0-8f7d-5e3ce7749c1d container env-test:
STEP: delete the pod
Jun 19 12:56:46.951: INFO: Waiting for pod pod-configmaps-3e8602b4-1e1f-4ce0-8f7d-5e3ce7749c1d to disappear
Jun 19 12:56:46.955: INFO: Pod pod-configmaps-3e8602b4-1e1f-4ce0-8f7d-5e3ce7749c1d no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:56:46.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2268" for this suite.
Jun 19 12:56:52.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:56:53.066: INFO: namespace configmap-2268 deletion completed in 6.10808861s

• [SLOW TEST:10.240 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
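Note: the env-var path uses configMapKeyRef rather than a volume. An illustrative equivalent (names are assumptions):

kubectl create configmap env-config-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-config-demo
          key: data-1
EOF
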
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:56:53.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jun 19 12:56:57.674: INFO: Successfully updated pod "pod-update-a15efa3e-b43e-4559-980e-4aef56c3b5a2"
STEP: verifying the updated pod is in kubernetes
Jun 19 12:56:57.698: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:56:57.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6353" for this suite.
Jun 19 12:57:19.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:57:19.817: INFO: namespace pods-6353 deletion completed in 22.115523106s

• [SLOW TEST:26.750 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
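Note: the log does not show which field "updating the pod" changes; a mutable-metadata update done by hand (pod name and label are illustrative assumptions) looks like:

kubectl run pod-update-demo --image=nginx --restart=Never
kubectl label pod pod-update-demo time=$(date +%s) --overwrite
kubectl get pod pod-update-demo --show-labels
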
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:57:19.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 19 12:57:19.880: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7e94b64d-ec96-4b6e-9a41-7e7e8c84c374" in namespace "downward-api-7074" to be "success or failure"
Jun 19 12:57:19.883: INFO: Pod "downwardapi-volume-7e94b64d-ec96-4b6e-9a41-7e7e8c84c374": Phase="Pending", Reason="", readiness=false. Elapsed: 3.163602ms
Jun 19 12:57:21.888: INFO: Pod "downwardapi-volume-7e94b64d-ec96-4b6e-9a41-7e7e8c84c374": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007315113s
Jun 19 12:57:23.892: INFO: Pod "downwardapi-volume-7e94b64d-ec96-4b6e-9a41-7e7e8c84c374": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011691329s
STEP: Saw pod success
Jun 19 12:57:23.892: INFO: Pod "downwardapi-volume-7e94b64d-ec96-4b6e-9a41-7e7e8c84c374" satisfied condition "success or failure"
Jun 19 12:57:23.895: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7e94b64d-ec96-4b6e-9a41-7e7e8c84c374 container client-container:
STEP: delete the pod
Jun 19 12:57:23.915: INFO: Waiting for pod downwardapi-volume-7e94b64d-ec96-4b6e-9a41-7e7e8c84c374 to disappear
Jun 19 12:57:23.919: INFO: Pod downwardapi-volume-7e94b64d-ec96-4b6e-9a41-7e7e8c84c374 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:57:23.919: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7074" for this suite.
Jun 19 12:57:29.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:57:30.040: INFO: namespace downward-api-7074 deletion completed in 6.098632457s

• [SLOW TEST:10.223 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
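Note: the behavior under test is a downwardAPI resourceFieldRef with no CPU limit declared, in which case the projected value falls back to node allocatable. An illustrative equivalent (names and image are assumptions):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/cpu_limit"]
    # no resources.limits.cpu set, so the file reports node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF
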
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:57:30.041: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jun 19 12:57:30.126: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3495'
Jun 19 12:57:32.764: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jun 19 12:57:32.764: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
Jun 19 12:57:32.779: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
Jun 19 12:57:32.788: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jun 19 12:57:32.873: INFO: scanned /root for discovery docs:
Jun 19 12:57:32.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3495'
Jun 19 12:57:48.787: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jun 19 12:57:48.787: INFO: stdout: "Created e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e\nScaling up e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jun 19 12:57:48.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3495'
Jun 19 12:57:48.889: INFO: stderr: ""
Jun 19 12:57:48.889: INFO: stdout: "e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e-x4qq5 e2e-test-nginx-rc-pncq5 "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jun 19 12:57:53.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-3495'
Jun 19 12:57:54.010: INFO: stderr: ""
Jun 19 12:57:54.010: INFO: stdout: "e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e-x4qq5 "
Jun 19 12:57:54.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e-x4qq5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3495'
Jun 19 12:57:54.098: INFO: stderr: ""
Jun 19 12:57:54.098: INFO: stdout: "true"
Jun 19 12:57:54.098: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e-x4qq5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3495'
Jun 19 12:57:54.183: INFO: stderr: ""
Jun 19 12:57:54.184: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jun 19 12:57:54.184: INFO: e2e-test-nginx-rc-a43516238cbe614d5a1d76348a04566e-x4qq5 is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jun 19 12:57:54.184: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3495'
Jun 19 12:57:54.292: INFO: stderr: ""
Jun 19 12:57:54.292: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:57:54.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3495" for this suite.
Jun 19 12:58:16.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:58:16.423: INFO: namespace kubectl-3495 deletion completed in 22.114189061s

• [SLOW TEST:46.382 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
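Note: the two deprecated commands driving this spec appear verbatim above and can be replayed against any cluster with kubectl <= 1.17 (rolling-update was removed in later releases; the rc name below is illustrative):

kubectl run e2e-demo-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1
kubectl rolling-update e2e-demo-rc --update-period=1s \
  --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent
# Modern equivalent: manage a Deployment and use `kubectl rollout`.
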
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:58:16.424: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:58:16.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2688" for this suite.
Jun 19 12:58:22.494: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:58:22.578: INFO: namespace services-2688 deletion completed in 6.09596836s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.154 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
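Note: the check itself logs nothing; it verifies that the built-in `kubernetes` service in the default namespace exposes the API server on the https port, which can be confirmed by hand:

kubectl get service kubernetes -n default -o wide
kubectl get service kubernetes -n default -o jsonpath='{.spec.ports[0].port}'   # expect 443
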
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:58:22.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3083
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jun 19 12:58:22.646: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jun 19 12:58:48.771: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.33:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3083 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 19 12:58:48.771: INFO: >>> kubeConfig: /root/.kube/config
I0619 12:58:48.812322 6 log.go:172] (0xc000c0b760) (0xc002c30320) Create stream
I0619 12:58:48.812357 6 log.go:172] (0xc000c0b760) (0xc002c30320) Stream added, broadcasting: 1
I0619 12:58:48.814980 6 log.go:172] (0xc000c0b760) Reply frame received for 1
I0619 12:58:48.815024 6 log.go:172] (0xc000c0b760) (0xc001e92460) Create stream
I0619 12:58:48.815039 6 log.go:172] (0xc000c0b760) (0xc001e92460) Stream added, broadcasting: 3
I0619 12:58:48.815981 6 log.go:172] (0xc000c0b760) Reply frame received for 3
I0619 12:58:48.816029 6 log.go:172] (0xc000c0b760) (0xc002c303c0) Create stream
I0619 12:58:48.816051 6 log.go:172] (0xc000c0b760) (0xc002c303c0) Stream added, broadcasting: 5
I0619 12:58:48.816928 6 log.go:172] (0xc000c0b760) Reply frame received for 5
I0619 12:58:48.924329 6 log.go:172] (0xc000c0b760) Data frame received for 3
I0619 12:58:48.924376 6 log.go:172] (0xc001e92460) (3) Data frame handling
I0619 12:58:48.924485 6 log.go:172] (0xc000c0b760) Data frame received for 5
I0619 12:58:48.924570 6 log.go:172] (0xc002c303c0) (5) Data frame handling
I0619 12:58:48.924604 6 log.go:172] (0xc001e92460) (3) Data frame sent
I0619 12:58:48.924640 6 log.go:172] (0xc000c0b760) Data frame received for 3
I0619 12:58:48.924659 6 log.go:172] (0xc001e92460) (3) Data frame handling
I0619 12:58:48.926373 6 log.go:172] (0xc000c0b760) Data frame received for 1
I0619 12:58:48.926394 6 log.go:172] (0xc002c30320) (1) Data frame handling
I0619 12:58:48.926409 6 log.go:172] (0xc002c30320) (1) Data frame sent
I0619 12:58:48.926423 6 log.go:172] (0xc000c0b760) (0xc002c30320) Stream removed, broadcasting: 1
I0619 12:58:48.926445 6 log.go:172] (0xc000c0b760) Go away received
I0619 12:58:48.926842 6 log.go:172] (0xc000c0b760) (0xc002c30320) Stream removed, broadcasting: 1
I0619 12:58:48.926866 6 log.go:172] (0xc000c0b760) (0xc001e92460) Stream removed, broadcasting: 3
I0619 12:58:48.926876 6 log.go:172] (0xc000c0b760) (0xc002c303c0) Stream removed, broadcasting: 5
Jun 19 12:58:48.926: INFO: Found all expected endpoints: [netserver-0]
Jun 19 12:58:48.929: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.17:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3083 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 19 12:58:48.929: INFO: >>> kubeConfig: /root/.kube/config
I0619 12:58:48.964551 6 log.go:172] (0xc00029d130) (0xc002c30780) Create stream
I0619 12:58:48.964588 6 log.go:172] (0xc00029d130) (0xc002c30780) Stream added, broadcasting: 1
I0619 12:58:48.966899 6 log.go:172] (0xc00029d130) Reply frame received for 1
I0619 12:58:48.966946 6 log.go:172] (0xc00029d130) (0xc002c30820) Create stream
I0619 12:58:48.966963 6 log.go:172] (0xc00029d130) (0xc002c30820) Stream added, broadcasting: 3
I0619 12:58:48.967964 6 log.go:172] (0xc00029d130) Reply frame received for 3
I0619 12:58:48.968009 6 log.go:172] (0xc00029d130) (0xc000bde1e0) Create stream
I0619 12:58:48.968031 6 log.go:172] (0xc00029d130) (0xc000bde1e0) Stream added, broadcasting: 5
I0619 12:58:48.969078 6 log.go:172] (0xc00029d130) Reply frame received for 5
I0619 12:58:49.052619 6 log.go:172] (0xc00029d130) Data frame received for 3
I0619 12:58:49.052651 6 log.go:172] (0xc002c30820) (3) Data frame handling
I0619 12:58:49.052676 6 log.go:172] (0xc002c30820) (3) Data frame sent
I0619 12:58:49.052695 6 log.go:172] (0xc00029d130) Data frame received for 3
I0619 12:58:49.052708 6 log.go:172] (0xc002c30820) (3) Data frame handling
I0619 12:58:49.053042 6 log.go:172] (0xc00029d130) Data frame received for 5
I0619 12:58:49.053062 6 log.go:172] (0xc000bde1e0) (5) Data frame handling
I0619 12:58:49.054863 6 log.go:172] (0xc00029d130) Data frame received for 1
I0619 12:58:49.054900 6 log.go:172] (0xc002c30780) (1) Data frame handling
I0619 12:58:49.054911 6 log.go:172] (0xc002c30780) (1) Data frame sent
I0619 12:58:49.054934 6 log.go:172] (0xc00029d130) (0xc002c30780) Stream removed, broadcasting: 1
I0619 12:58:49.054972 6 log.go:172] (0xc00029d130) Go away received
I0619 12:58:49.055035 6 log.go:172] (0xc00029d130) (0xc002c30780) Stream removed, broadcasting: 1
I0619 12:58:49.055061 6 log.go:172] (0xc00029d130) (0xc002c30820) Stream removed, broadcasting: 3
I0619 12:58:49.055077 6 log.go:172] (0xc00029d130) (0xc000bde1e0) Stream removed, broadcasting: 5
Jun 19 12:58:49.055: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:58:49.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3083" for this suite.
Jun 19 12:59:11.075: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:59:11.182: INFO: namespace pod-network-test-3083 deletion completed in 22.122849003s

• [SLOW TEST:48.604 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
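Note: the probe the framework ran above is an ordinary curl executed inside a helper pod, and can be replayed by hand while the namespace exists (the pod IP 10.244.2.33 is specific to this run; substitute your own):

kubectl exec -n pod-network-test-3083 host-test-container-pod -- \
  /bin/sh -c "curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.33:8080/hostName"
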
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:59:11.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-93025744-23ed-445b-bab6-5fb09340e88b
STEP: Creating a pod to test consume configMaps
Jun 19 12:59:11.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b7ed2b90-61f2-4b57-878d-687002a05481" in namespace "projected-2751" to be "success or failure"
Jun 19 12:59:11.266: INFO: Pod "pod-projected-configmaps-b7ed2b90-61f2-4b57-878d-687002a05481": Phase="Pending", Reason="", readiness=false. Elapsed: 14.999723ms
Jun 19 12:59:13.270: INFO: Pod "pod-projected-configmaps-b7ed2b90-61f2-4b57-878d-687002a05481": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018662501s
Jun 19 12:59:15.274: INFO: Pod "pod-projected-configmaps-b7ed2b90-61f2-4b57-878d-687002a05481": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022989996s
STEP: Saw pod success
Jun 19 12:59:15.274: INFO: Pod "pod-projected-configmaps-b7ed2b90-61f2-4b57-878d-687002a05481" satisfied condition "success or failure"
Jun 19 12:59:15.278: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-b7ed2b90-61f2-4b57-878d-687002a05481 container projected-configmap-volume-test:
STEP: delete the pod
Jun 19 12:59:15.310: INFO: Waiting for pod pod-projected-configmaps-b7ed2b90-61f2-4b57-878d-687002a05481 to disappear
Jun 19 12:59:15.315: INFO: Pod pod-projected-configmaps-b7ed2b90-61f2-4b57-878d-687002a05481 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 12:59:15.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2751" for this suite.
Jun 19 12:59:21.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 12:59:21.412: INFO: namespace projected-2751 deletion completed in 6.09461552s

• [SLOW TEST:10.230 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
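Note: "multiple volumes" here means the same configMap projected into two mount points of one pod. An illustrative equivalent (names and image are assumptions):

kubectl create configmap projected-demo --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-multi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/volume-1/data-1 /etc/volume-2/data-1"]
    volumeMounts:
    - name: vol-1
      mountPath: /etc/volume-1
    - name: vol-2
      mountPath: /etc/volume-2
  volumes:                        # same configMap, projected twice
  - name: vol-1
    projected:
      sources:
      - configMap:
          name: projected-demo
  - name: vol-2
    projected:
      sources:
      - configMap:
          name: projected-demo
EOF
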
[sig-network] Proxy version v1
  should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 12:59:21.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-bb66x in namespace proxy-8647
I0619 12:59:21.569076 6 runners.go:180] Created replication controller with name: proxy-service-bb66x, namespace: proxy-8647, replica count: 1
I0619 12:59:22.619627 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0619 12:59:23.619802 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0619 12:59:24.620081 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0619 12:59:25.620332 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0619 12:59:26.620552 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0619 12:59:27.620794 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0619 12:59:28.621022 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0619 12:59:29.621474 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0619 12:59:30.621694 6 runners.go:180] proxy-service-bb66x Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Jun 19 12:59:30.624: INFO: setup took 9.149922268s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jun 19 12:59:30.630: INFO: (0) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 5.974947ms)
Jun 19 12:59:30.630: INFO: (0) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 5.636576ms)
Jun 19 12:59:30.634: INFO: (0) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 9.143524ms)
Jun 19 12:59:30.634: INFO: (0) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 8.826277ms)
Jun 19 12:59:30.634: INFO: (0) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 9.414688ms)
Jun 19 12:59:30.634: INFO: (0) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 9.432308ms)
Jun 19 12:59:30.634: INFO: (0) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 9.430649ms)
Jun 19 12:59:30.634: INFO: (0) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 9.413252ms)
Jun 19 12:59:30.634: INFO: (0) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 9.558449ms)
Jun 19 12:59:30.634: INFO: (0) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 9.860614ms)
Jun 19 12:59:30.635: INFO: (0) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 9.939316ms)
Jun 19 12:59:30.646: INFO: (0) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 21.281086ms)
Jun 19 12:59:30.646: INFO: (0) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 21.320177ms)
Jun 19 12:59:30.646: INFO: (0) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: test<... (200; 4.132325ms)
Jun 19 12:59:30.650: INFO: (1) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 4.230847ms)
Jun 19 12:59:30.650: INFO: (1) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 4.127282ms)
Jun 19 12:59:30.650: INFO: (1) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.148837ms)
Jun 19 12:59:30.651: INFO: (1) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 4.133613ms)
Jun 19 12:59:30.651: INFO: (1) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.178824ms)
Jun 19 12:59:30.651: INFO: (1) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 4.401393ms)
Jun 19 12:59:30.652: INFO: (1) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.271061ms)
Jun 19 12:59:30.652: INFO: (1) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 5.266798ms)
Jun 19 12:59:30.652: INFO: (1) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 5.416201ms)
Jun 19 12:59:30.652: INFO: (1) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 5.50689ms)
Jun 19 12:59:30.652: INFO: (1) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 5.620544ms)
Jun 19 12:59:30.652: INFO: (1) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 5.653728ms)
Jun 19 12:59:30.654: INFO: (2) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 2.079707ms)
Jun 19 12:59:30.654: INFO: (2) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 2.242328ms)
Jun 19 12:59:30.656: INFO: (2) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 3.983748ms)
Jun 19 12:59:30.657: INFO: (2) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 4.594313ms)
Jun 19 12:59:30.657: INFO: (2) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 5.319391ms)
Jun 19 12:59:30.657: INFO: (2) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 5.303646ms)
Jun 19 12:59:30.657: INFO: (2) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 5.3735ms)
Jun 19 12:59:30.657: INFO: (2) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 5.332133ms)
Jun 19 12:59:30.657: INFO: (2) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 5.43548ms)
Jun 19 12:59:30.657: INFO: (2) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 5.434711ms)
Jun 19 12:59:30.658: INFO: (2) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: ... (200; 5.404734ms)
Jun 19 12:59:30.658: INFO: (2) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 5.516913ms)
Jun 19 12:59:30.658: INFO: (2) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.515381ms)
Jun 19 12:59:30.658: INFO: (2) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 5.561325ms)
Jun 19 12:59:30.658: INFO: (2) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 6.168592ms)
Jun 19 12:59:30.661: INFO: (3) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 2.893818ms)
Jun 19 12:59:30.661: INFO: (3) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 2.98772ms)
Jun 19 12:59:30.661: INFO: (3) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.040606ms)
Jun 19 12:59:30.661: INFO: (3) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 3.033169ms)
Jun 19 12:59:30.662: INFO: (3) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 3.649016ms)
Jun 19 12:59:30.662: INFO: (3) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 3.777012ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 4.139955ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 4.242748ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 4.504095ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 4.486633ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 4.464759ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 4.397035ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.570522ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 4.885182ms)
Jun 19 12:59:30.663: INFO: (3) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 4.783367ms)
Jun 19 12:59:30.667: INFO: (4) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 3.906421ms)
Jun 19 12:59:30.667: INFO: (4) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.957794ms)
Jun 19 12:59:30.667: INFO: (4) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 3.916262ms)
Jun 19 12:59:30.667: INFO: (4) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 3.922989ms)
Jun 19 12:59:30.668: INFO: (4) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 4.451628ms)
Jun 19 12:59:30.668: INFO: (4) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 5.129072ms)
Jun 19 12:59:30.668: INFO: (4) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 5.193775ms)
Jun 19 12:59:30.668: INFO: (4) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 5.212362ms)
Jun 19 12:59:30.669: INFO: (4) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.259027ms)
Jun 19 12:59:30.669: INFO: (4) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 5.281225ms)
Jun 19 12:59:30.669: INFO: (4) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 5.203466ms)
Jun 19 12:59:30.669: INFO: (4) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 5.201195ms)
Jun 19 12:59:30.672: INFO: (5) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 3.865169ms)
Jun 19 12:59:30.673: INFO: (5) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.242925ms)
Jun 19 12:59:30.673: INFO: (5) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 4.441048ms)
Jun 19 12:59:30.673: INFO: (5) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 4.477534ms)
Jun 19 12:59:30.673: INFO: (5) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.472817ms)
Jun 19 12:59:30.673: INFO: (5) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 4.752424ms)
Jun 19 12:59:30.675: INFO: (5) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: test<... (200; 5.827113ms)
Jun 19 12:59:30.675: INFO: (5) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 6.030962ms)
Jun 19 12:59:30.675: INFO: (5) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 6.063963ms)
Jun 19 12:59:30.676: INFO: (5) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 7.409286ms)
Jun 19 12:59:30.676: INFO: (5) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 7.414874ms)
Jun 19 12:59:30.676: INFO: (5) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 7.807711ms)
Jun 19 12:59:30.677: INFO: (5) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 7.710843ms)
Jun 19 12:59:30.677: INFO: (5) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 8.18488ms)
Jun 19 12:59:30.677: INFO: (5) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 8.010357ms)
Jun 19 12:59:30.679: INFO: (6) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 2.42577ms)
Jun 19 12:59:30.680: INFO: (6) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: test<... (200; 2.574916ms)
Jun 19 12:59:30.681: INFO: (6) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 2.886123ms)
Jun 19 12:59:30.681: INFO: (6) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 3.396825ms)
Jun 19 12:59:30.681: INFO: (6) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.26416ms)
Jun 19 12:59:30.681: INFO: (6) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 4.27658ms)
Jun 19 12:59:30.681: INFO: (6) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.855327ms)
Jun 19 12:59:30.682: INFO: (6) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 5.071186ms)
Jun 19 12:59:30.682: INFO: (6) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 4.201187ms)
Jun 19 12:59:30.682: INFO: (6) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 5.21227ms)
Jun 19 12:59:30.682: INFO: (6) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.042397ms)
Jun 19 12:59:30.682: INFO: (6) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 4.439646ms)
Jun 19 12:59:30.687: INFO: (7) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.299865ms)
Jun 19 12:59:30.687: INFO: (7) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.366245ms)
Jun 19 12:59:30.687: INFO: (7) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 4.391209ms)
Jun 19 12:59:30.687: INFO: (7) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 4.776551ms)
Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 5.050128ms)
Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 4.990791ms)
Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 5.129717ms)
Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 5.305238ms)
(200; 5.305238ms) Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 5.25337ms) Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 5.336897ms) Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 5.391297ms) Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.874298ms) Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 5.86952ms) Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 5.898167ms) Jun 19 12:59:30.688: INFO: (7) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 5.943902ms) Jun 19 12:59:30.693: INFO: (8) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 4.29369ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 5.196634ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.373555ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 5.235145ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 5.627683ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 5.620791ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 5.420615ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 5.706561ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 5.567069ms) Jun 19 12:59:30.694: INFO: (8) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 5.792947ms) Jun 19 12:59:30.695: INFO: (8) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: ... (200; 5.904026ms) Jun 19 12:59:30.695: INFO: (8) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 6.218148ms) Jun 19 12:59:30.695: INFO: (8) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 6.248714ms) Jun 19 12:59:30.695: INFO: (8) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 6.044269ms) Jun 19 12:59:30.695: INFO: (8) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 6.216106ms) Jun 19 12:59:30.697: INFO: (9) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 2.109568ms) Jun 19 12:59:30.698: INFO: (9) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: ... (200; 4.26061ms) Jun 19 12:59:30.699: INFO: (9) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 4.182209ms) Jun 19 12:59:30.699: INFO: (9) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... 
(200; 4.399354ms) Jun 19 12:59:30.700: INFO: (9) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 4.60441ms) Jun 19 12:59:30.700: INFO: (9) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 4.597734ms) Jun 19 12:59:30.700: INFO: (9) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 4.84937ms) Jun 19 12:59:30.700: INFO: (9) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.963196ms) Jun 19 12:59:30.700: INFO: (9) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 4.910378ms) Jun 19 12:59:30.703: INFO: (10) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 2.308815ms) Jun 19 12:59:30.704: INFO: (10) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 2.487672ms) Jun 19 12:59:30.704: INFO: (10) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 2.910805ms) Jun 19 12:59:30.704: INFO: (10) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 2.948276ms) Jun 19 12:59:30.704: INFO: (10) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 3.035618ms) Jun 19 12:59:30.704: INFO: (10) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 3.602981ms) Jun 19 12:59:30.704: INFO: (10) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.278064ms) Jun 19 12:59:30.704: INFO: (10) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 3.799602ms) Jun 19 12:59:30.704: INFO: (10) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 3.19832ms) Jun 19 12:59:30.705: INFO: (10) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 4.710313ms) Jun 19 12:59:30.705: INFO: (10) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: test (200; 4.09194ms) Jun 19 12:59:30.711: INFO: (11) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 4.599675ms) Jun 19 12:59:30.711: INFO: (11) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 5.754504ms) Jun 19 12:59:30.711: INFO: (11) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 4.886049ms) Jun 19 12:59:30.711: INFO: (11) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 5.056358ms) Jun 19 12:59:30.711: INFO: (11) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 5.955702ms) Jun 19 12:59:30.712: INFO: (11) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.978604ms) Jun 19 12:59:30.712: INFO: (11) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 5.571581ms) Jun 19 12:59:30.712: INFO: (11) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: test<... (200; 4.696227ms) Jun 19 12:59:30.717: INFO: (12) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 4.515349ms) Jun 19 12:59:30.717: INFO: (12) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... 
(200; 4.727307ms) Jun 19 12:59:30.717: INFO: (12) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 4.736724ms) Jun 19 12:59:30.717: INFO: (12) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 4.815127ms) Jun 19 12:59:30.717: INFO: (12) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 4.602174ms) Jun 19 12:59:30.718: INFO: (12) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 5.05058ms) Jun 19 12:59:30.718: INFO: (12) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: ... (200; 2.954263ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 4.936787ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 5.093309ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 5.126479ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 5.624613ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 5.48192ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 5.461208ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 5.654964ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 5.609758ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.776057ms) Jun 19 12:59:30.724: INFO: (13) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 5.747374ms) Jun 19 12:59:30.725: INFO: (13) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 6.288057ms) Jun 19 12:59:30.725: INFO: (13) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 6.307948ms) Jun 19 12:59:30.725: INFO: (13) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 6.283795ms) Jun 19 12:59:30.725: INFO: (13) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: ... (200; 4.709318ms) Jun 19 12:59:30.730: INFO: (14) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 4.691966ms) Jun 19 12:59:30.730: INFO: (14) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 4.807647ms) Jun 19 12:59:30.730: INFO: (14) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 4.805156ms) Jun 19 12:59:30.730: INFO: (14) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.825672ms) Jun 19 12:59:30.730: INFO: (14) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 4.828872ms) Jun 19 12:59:30.730: INFO: (14) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 4.801557ms) Jun 19 12:59:30.730: INFO: (14) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... 
(200; 4.872406ms) Jun 19 12:59:30.730: INFO: (14) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: ... (200; 3.618494ms) Jun 19 12:59:30.734: INFO: (15) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 3.616071ms) Jun 19 12:59:30.734: INFO: (15) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 3.562692ms) Jun 19 12:59:30.734: INFO: (15) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 3.507154ms) Jun 19 12:59:30.734: INFO: (15) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.702221ms) Jun 19 12:59:30.734: INFO: (15) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.767389ms) Jun 19 12:59:30.734: INFO: (15) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: test<... (200; 3.921449ms) Jun 19 12:59:30.735: INFO: (15) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 4.794356ms) Jun 19 12:59:30.735: INFO: (15) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.386947ms) Jun 19 12:59:30.736: INFO: (15) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 4.768489ms) Jun 19 12:59:30.736: INFO: (15) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 5.110525ms) Jun 19 12:59:30.736: INFO: (15) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 5.014813ms) Jun 19 12:59:30.736: INFO: (15) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 4.970042ms) Jun 19 12:59:30.740: INFO: (16) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 4.453555ms) Jun 19 12:59:30.740: INFO: (16) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 4.504548ms) Jun 19 12:59:30.740: INFO: (16) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 4.463124ms) Jun 19 12:59:30.740: INFO: (16) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.536178ms) Jun 19 12:59:30.740: INFO: (16) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 4.596883ms) Jun 19 12:59:30.740: INFO: (16) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 4.569652ms) Jun 19 12:59:30.740: INFO: (16) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: ... 
(200; 5.207546ms) Jun 19 12:59:30.741: INFO: (16) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 5.209701ms) Jun 19 12:59:30.741: INFO: (16) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 5.240796ms) Jun 19 12:59:30.741: INFO: (16) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 5.185622ms) Jun 19 12:59:30.741: INFO: (16) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 5.191005ms) Jun 19 12:59:30.741: INFO: (16) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 5.21643ms) Jun 19 12:59:30.741: INFO: (16) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 5.252398ms) Jun 19 12:59:30.741: INFO: (16) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 5.271922ms) Jun 19 12:59:30.741: INFO: (16) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 5.288052ms) Jun 19 12:59:30.744: INFO: (17) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 2.973508ms) Jun 19 12:59:30.744: INFO: (17) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 2.998753ms) Jun 19 12:59:30.744: INFO: (17) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 3.03999ms) Jun 19 12:59:30.745: INFO: (17) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 3.398442ms) Jun 19 12:59:30.745: INFO: (17) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... (200; 3.370178ms) Jun 19 12:59:30.745: INFO: (17) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 3.438644ms) Jun 19 12:59:30.745: INFO: (17) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 3.472653ms) Jun 19 12:59:30.745: INFO: (17) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: test<... (200; 3.18111ms) Jun 19 12:59:30.751: INFO: (18) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 3.440426ms) Jun 19 12:59:30.751: INFO: (18) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 3.493417ms) Jun 19 12:59:30.751: INFO: (18) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 3.336505ms) Jun 19 12:59:30.751: INFO: (18) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.53065ms) Jun 19 12:59:30.751: INFO: (18) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: ... 
(200; 3.545314ms) Jun 19 12:59:30.751: INFO: (18) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 3.749915ms) Jun 19 12:59:30.751: INFO: (18) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 3.76993ms) Jun 19 12:59:30.753: INFO: (18) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 5.010512ms) Jun 19 12:59:30.753: INFO: (18) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 5.028513ms) Jun 19 12:59:30.753: INFO: (18) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 5.115857ms) Jun 19 12:59:30.753: INFO: (18) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 5.012826ms) Jun 19 12:59:30.753: INFO: (18) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 5.138039ms) Jun 19 12:59:30.762: INFO: (19) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname2/proxy/: bar (200; 9.150026ms) Jun 19 12:59:30.762: INFO: (19) /api/v1/namespaces/proxy-8647/services/proxy-service-bb66x:portname1/proxy/: foo (200; 9.217985ms) Jun 19 12:59:30.762: INFO: (19) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n/proxy/: test (200; 9.110271ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 10.087935ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:1080/proxy/: ... (200; 10.063474ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname2/proxy/: bar (200; 10.335401ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/services/http:proxy-service-bb66x:portname1/proxy/: foo (200; 10.284561ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 10.391419ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:462/proxy/: tls qux (200; 10.411077ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/http:proxy-service-bb66x-qrj4n:162/proxy/: bar (200; 10.413402ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:460/proxy/: tls baz (200; 10.405414ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:160/proxy/: foo (200; 10.410581ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname1/proxy/: tls baz (200; 10.472157ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/services/https:proxy-service-bb66x:tlsportname2/proxy/: tls qux (200; 10.423871ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/proxy-service-bb66x-qrj4n:1080/proxy/: test<... 
(200; 10.438838ms) Jun 19 12:59:30.763: INFO: (19) /api/v1/namespaces/proxy-8647/pods/https:proxy-service-bb66x-qrj4n:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 19 12:59:48.141: INFO: Waiting up to 5m0s for pod "pod-98d1da18-ab3d-4563-b198-10132cb9bb8c" in namespace "emptydir-7520" to be "success or failure" Jun 19 12:59:48.156: INFO: Pod "pod-98d1da18-ab3d-4563-b198-10132cb9bb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.149531ms Jun 19 12:59:50.160: INFO: Pod "pod-98d1da18-ab3d-4563-b198-10132cb9bb8c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018697239s Jun 19 12:59:52.163: INFO: Pod "pod-98d1da18-ab3d-4563-b198-10132cb9bb8c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022313443s STEP: Saw pod success Jun 19 12:59:52.163: INFO: Pod "pod-98d1da18-ab3d-4563-b198-10132cb9bb8c" satisfied condition "success or failure" Jun 19 12:59:52.166: INFO: Trying to get logs from node iruya-worker pod pod-98d1da18-ab3d-4563-b198-10132cb9bb8c container test-container: STEP: delete the pod Jun 19 12:59:52.185: INFO: Waiting for pod pod-98d1da18-ab3d-4563-b198-10132cb9bb8c to disappear Jun 19 12:59:52.190: INFO: Pod pod-98d1da18-ab3d-4563-b198-10132cb9bb8c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 12:59:52.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7520" for this suite. 
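For context, the (non-root,0666,tmpfs) emptydir case above boils down to a pod that mounts a memory-backed emptyDir volume, writes a file with 0666 permissions as a non-root user, and runs to completion. The following is a minimal sketch in Go, assuming the k8s.io/api and k8s.io/apimachinery modules are available; the image, command, paths, and user ID are illustrative, not the exact values the suite uses:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Memory-backed (tmpfs) emptyDir, written to by a non-root user; the
        // pod runs once to completion like the test pod above.
        user := int64(1001) // illustrative non-root UID
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0666-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy:   corev1.RestartPolicyNever,
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &user},
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{
                            Medium: corev1.StorageMediumMemory, // tmpfs rather than node disk
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:    "test-container",
                    Image:   "busybox", // illustrative; the suite uses its own mounttest image
                    Command: []string{"sh", "-c", "touch /mnt/volume/f && chmod 0666 /mnt/volume/f && ls -l /mnt/volume/f"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "test-volume",
                        MountPath: "/mnt/volume",
                    }},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // print the manifest rather than create it
    }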
Jun 19 12:59:58.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 12:59:58.289: INFO: namespace emptydir-7520 deletion completed in 6.096698891s • [SLOW TEST:10.278 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 12:59:58.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 12:59:58.342: INFO: Creating deployment "test-recreate-deployment" Jun 19 12:59:58.369: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 Jun 19 12:59:58.407: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Jun 19 13:00:00.415: INFO: Waiting for deployment "test-recreate-deployment" to complete Jun 19 13:00:00.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728168398, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728168398, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728168398, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728168398, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 13:00:02.422: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Jun 19 13:00:02.428: INFO: Updating deployment test-recreate-deployment Jun 19 13:00:02.428: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 19 13:00:02.684: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7660,SelfLink:/apis/apps/v1/namespaces/deployment-7660/deployments/test-recreate-deployment,UID:03d9255e-ffe9-4e0e-a459-8610e2e4a4ca,ResourceVersion:17311871,Generation:2,CreationTimestamp:2020-06-19 12:59:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-06-19 13:00:02 +0000 UTC 2020-06-19 13:00:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-06-19 13:00:02 +0000 UTC 2020-06-19 12:59:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Jun 19 13:00:02.727: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7660,SelfLink:/apis/apps/v1/namespaces/deployment-7660/replicasets/test-recreate-deployment-5c8c9cc69d,UID:fccb37da-9a87-44ac-a27b-6d1e8abfb5c7,ResourceVersion:17311870,Generation:1,CreationTimestamp:2020-06-19 13:00:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 03d9255e-ffe9-4e0e-a459-8610e2e4a4ca 0xc002b2be07 0xc002b2be08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 19 13:00:02.727: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Jun 19 13:00:02.727: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7660,SelfLink:/apis/apps/v1/namespaces/deployment-7660/replicasets/test-recreate-deployment-6df85df6b9,UID:8c6b6014-b19c-41f5-b1b5-ff5af2a45fc2,ResourceVersion:17311859,Generation:2,CreationTimestamp:2020-06-19 12:59:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 03d9255e-ffe9-4e0e-a459-8610e2e4a4ca 0xc002b2bed7 0xc002b2bed8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 
6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 19 13:00:02.731: INFO: Pod "test-recreate-deployment-5c8c9cc69d-lwtnv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-lwtnv,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7660,SelfLink:/api/v1/namespaces/deployment-7660/pods/test-recreate-deployment-5c8c9cc69d-lwtnv,UID:90e0f9a7-3ae7-4e76-8d0b-ae7e09098226,ResourceVersion:17311872,Generation:0,CreationTimestamp:2020-06-19 13:00:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d fccb37da-9a87-44ac-a27b-6d1e8abfb5c7 0xc002b867a7 0xc002b867a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-prz56 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-prz56,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-prz56 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b86820} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b86840}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:00:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:00:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:00:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:00:02 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-19 13:00:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:00:02.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7660" for this suite. 
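The dumps above show why the Recreate strategy behaves as it does: the old ReplicaSet (test-recreate-deployment-6df85df6b9, running redis) is scaled to Replicas:*0 before the new one (5c8c9cc69d, running nginx) is scaled up, so old and new pods never overlap. Below is a minimal sketch of such a Deployment in Go, assuming the k8s.io/api modules; only the fields the test exercises are set, with names taken from the log:

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Deployment with Strategy=Recreate: the controller tears the old
        // ReplicaSet down to zero before creating pods from the new template.
        replicas := int32(1)
        labels := map[string]string{"name": "sample-pod-3"}
        d := appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(d, "", "  ")
        fmt.Println(string(out)) // print the manifest rather than create it
    }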
Jun 19 13:00:08.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:00:08.916: INFO: namespace deployment-7660 deletion completed in 6.181401979s • [SLOW TEST:10.626 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:00:08.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-18f759a1-2214-4360-9571-803770210e5f STEP: Creating a pod to test consume secrets Jun 19 13:00:09.026: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-207cfb73-9d27-4264-a9e7-9a58e7178bda" in namespace "projected-9897" to be "success or failure" Jun 19 13:00:09.050: INFO: Pod "pod-projected-secrets-207cfb73-9d27-4264-a9e7-9a58e7178bda": Phase="Pending", Reason="", readiness=false. Elapsed: 23.66171ms Jun 19 13:00:11.054: INFO: Pod "pod-projected-secrets-207cfb73-9d27-4264-a9e7-9a58e7178bda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028000529s Jun 19 13:00:13.059: INFO: Pod "pod-projected-secrets-207cfb73-9d27-4264-a9e7-9a58e7178bda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032645817s STEP: Saw pod success Jun 19 13:00:13.059: INFO: Pod "pod-projected-secrets-207cfb73-9d27-4264-a9e7-9a58e7178bda" satisfied condition "success or failure" Jun 19 13:00:13.062: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-207cfb73-9d27-4264-a9e7-9a58e7178bda container projected-secret-volume-test: STEP: delete the pod Jun 19 13:00:13.095: INFO: Waiting for pod pod-projected-secrets-207cfb73-9d27-4264-a9e7-9a58e7178bda to disappear Jun 19 13:00:13.109: INFO: Pod pod-projected-secrets-207cfb73-9d27-4264-a9e7-9a58e7178bda no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:00:13.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9897" for this suite. 
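The projected-secret spec above mounts a secret through a projected volume and remaps one key to a new path with an explicit per-item mode. A minimal sketch of that volume definition in Go follows; the secret name is taken from the log, while the key, path, and 0400 mode are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Projected volume mapping a single secret key to a chosen path with
        // an explicit per-item file mode, as the spec above validates.
        mode := int32(0400) // illustrative item mode
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "projected-secret-test-map-18f759a1-2214-4360-9571-803770210e5f",
                            },
                            Items: []corev1.KeyToPath{{
                                Key:  "data-1",          // illustrative key
                                Path: "new-path-data-1", // illustrative mapping
                                Mode: &mode,
                            }},
                        },
                    }},
                },
            },
        }
        out, _ := json.MarshalIndent(vol, "", "  ")
        fmt.Println(string(out))
    }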
Jun 19 13:00:19.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:00:19.206: INFO: namespace projected-9897 deletion completed in 6.092353957s • [SLOW TEST:10.289 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:00:19.206: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-38a94f57-b0d7-4597-9dd8-f669461e340c in namespace container-probe-323 Jun 19 13:00:23.291: INFO: Started pod liveness-38a94f57-b0d7-4597-9dd8-f669461e340c in namespace container-probe-323 STEP: checking the pod's current state and verifying that restartCount is present Jun 19 13:00:23.294: INFO: Initial restart count of pod liveness-38a94f57-b0d7-4597-9dd8-f669461e340c is 0 Jun 19 13:00:43.344: INFO: Restart count of pod container-probe-323/liveness-38a94f57-b0d7-4597-9dd8-f669461e340c is now 1 (20.050021295s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:00:43.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-323" for this suite. 
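The probe spec above creates a pod whose /healthz endpoint starts failing, so the kubelet kills and restarts the container and the observed restart count moves from 0 to 1 about 20 seconds in. A minimal sketch of such a pod in Go, assuming the v1.15-era k8s.io/api where Probe embeds Handler; the image, port, and thresholds are illustrative:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // HTTP liveness probe on /healthz: once the handler fails
        // FailureThreshold times, the kubelet restarts the container.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-http"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "gcr.io/kubernetes-e2e-test-images/liveness:1.1", // assumed image
                    LivenessProbe: &corev1.Probe{
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/healthz",
                                Port: intstr.FromInt(8080), // assumed port
                            },
                        },
                        InitialDelaySeconds: 15,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out))
    }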
Jun 19 13:00:49.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:00:49.510: INFO: namespace container-probe-323 deletion completed in 6.126587182s • [SLOW TEST:30.304 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:00:49.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-rphb STEP: Creating a pod to test atomic-volume-subpath Jun 19 13:00:49.608: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-rphb" in namespace "subpath-7181" to be "success or failure" Jun 19 13:00:49.612: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.230832ms Jun 19 13:00:51.631: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023053596s Jun 19 13:00:53.636: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 4.028299034s Jun 19 13:00:55.641: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 6.032933199s Jun 19 13:00:57.645: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 8.036891028s Jun 19 13:00:59.650: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 10.041477002s Jun 19 13:01:01.656: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 12.047922768s Jun 19 13:01:03.675: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 14.066583824s Jun 19 13:01:05.679: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 16.071129075s Jun 19 13:01:07.683: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 18.074469132s Jun 19 13:01:09.687: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 20.078946293s Jun 19 13:01:11.691: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Running", Reason="", readiness=true. Elapsed: 22.083369999s Jun 19 13:01:13.775: INFO: Pod "pod-subpath-test-secret-rphb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.167265001s STEP: Saw pod success Jun 19 13:01:13.775: INFO: Pod "pod-subpath-test-secret-rphb" satisfied condition "success or failure" Jun 19 13:01:13.778: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-rphb container test-container-subpath-secret-rphb: STEP: delete the pod Jun 19 13:01:13.798: INFO: Waiting for pod pod-subpath-test-secret-rphb to disappear Jun 19 13:01:13.802: INFO: Pod pod-subpath-test-secret-rphb no longer exists STEP: Deleting pod pod-subpath-test-secret-rphb Jun 19 13:01:13.802: INFO: Deleting pod "pod-subpath-test-secret-rphb" in namespace "subpath-7181" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:01:13.804: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7181" for this suite. Jun 19 13:01:19.852: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:01:19.934: INFO: namespace subpath-7181 deletion completed in 6.126528105s • [SLOW TEST:30.423 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:01:19.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Jun 19 13:01:20.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9817' Jun 19 13:01:20.407: INFO: stderr: "" Jun 19 13:01:20.407: INFO: stdout: "pod/pause created\n" Jun 19 13:01:20.407: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Jun 19 13:01:20.408: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-9817" to be "running and ready" Jun 19 13:01:20.414: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.17956ms Jun 19 13:01:22.417: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009654532s Jun 19 13:01:24.424: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.016506132s Jun 19 13:01:24.424: INFO: Pod "pause" satisfied condition "running and ready" Jun 19 13:01:24.424: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Jun 19 13:01:24.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-9817' Jun 19 13:01:24.527: INFO: stderr: "" Jun 19 13:01:24.527: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Jun 19 13:01:24.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9817' Jun 19 13:01:24.634: INFO: stderr: "" Jun 19 13:01:24.634: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Jun 19 13:01:24.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-9817' Jun 19 13:01:24.740: INFO: stderr: "" Jun 19 13:01:24.740: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Jun 19 13:01:24.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-9817' Jun 19 13:01:24.873: INFO: stderr: "" Jun 19 13:01:24.873: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Jun 19 13:01:24.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9817' Jun 19 13:01:25.021: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 19 13:01:25.021: INFO: stdout: "pod \"pause\" force deleted\n" Jun 19 13:01:25.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-9817' Jun 19 13:01:25.285: INFO: stderr: "No resources found.\n" Jun 19 13:01:25.285: INFO: stdout: "" Jun 19 13:01:25.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-9817 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 19 13:01:25.376: INFO: stderr: "" Jun 19 13:01:25.376: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:01:25.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9817" for this suite. 
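The label add and remove that the kubectl spec above drives through "kubectl label" (including the trailing-dash "testing-label-" removal form) can equivalently be expressed as merge-patch bodies, where setting a key to null deletes it. A small Go sketch that only prints the two patch bodies; applying them, for example with kubectl patch pod pause -p '<body>', is an assumption rather than what the suite does:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Merge patch that adds/updates the label.
        addLabel := map[string]interface{}{
            "metadata": map[string]interface{}{
                "labels": map[string]interface{}{"testing-label": "testing-label-value"},
            },
        }
        // Merge patch that removes it: a null value deletes the key.
        removeLabel := map[string]interface{}{
            "metadata": map[string]interface{}{
                "labels": map[string]interface{}{"testing-label": nil},
            },
        }
        add, _ := json.Marshal(addLabel)
        rm, _ := json.Marshal(removeLabel)
        fmt.Println(string(add))
        fmt.Println(string(rm))
    }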
Jun 19 13:01:31.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:01:31.502: INFO: namespace kubectl-9817 deletion completed in 6.095959593s • [SLOW TEST:11.568 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:01:31.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:01:31.618: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Jun 19 13:01:31.626: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:31.629: INFO: Number of nodes with available pods: 0 Jun 19 13:01:31.629: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:01:32.634: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:32.637: INFO: Number of nodes with available pods: 0 Jun 19 13:01:32.637: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:01:33.790: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:33.793: INFO: Number of nodes with available pods: 0 Jun 19 13:01:33.793: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:01:34.664: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:34.667: INFO: Number of nodes with available pods: 0 Jun 19 13:01:34.667: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:01:35.635: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:35.638: INFO: Number of nodes with available pods: 2 Jun 19 13:01:35.638: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. 
Jun 19 13:01:35.706: INFO: Wrong image for pod: daemon-set-qw42n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:35.706: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:35.732: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:36.737: INFO: Wrong image for pod: daemon-set-qw42n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:36.737: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:36.742: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:37.736: INFO: Wrong image for pod: daemon-set-qw42n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:37.736: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:37.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:38.737: INFO: Wrong image for pod: daemon-set-qw42n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:38.737: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:38.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:39.737: INFO: Wrong image for pod: daemon-set-qw42n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:39.737: INFO: Pod daemon-set-qw42n is not available Jun 19 13:01:39.737: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:39.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:40.737: INFO: Pod daemon-set-vmz82 is not available Jun 19 13:01:40.737: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:40.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:41.736: INFO: Pod daemon-set-vmz82 is not available Jun 19 13:01:41.736: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 19 13:01:41.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:42.736: INFO: Pod daemon-set-vmz82 is not available Jun 19 13:01:42.736: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:42.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:43.813: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:43.818: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:44.736: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:44.736: INFO: Pod daemon-set-xgg5n is not available Jun 19 13:01:44.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:45.737: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:45.737: INFO: Pod daemon-set-xgg5n is not available Jun 19 13:01:45.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:46.737: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:46.737: INFO: Pod daemon-set-xgg5n is not available Jun 19 13:01:46.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:47.737: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:47.737: INFO: Pod daemon-set-xgg5n is not available Jun 19 13:01:47.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:48.736: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:48.737: INFO: Pod daemon-set-xgg5n is not available Jun 19 13:01:48.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:49.736: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Jun 19 13:01:49.736: INFO: Pod daemon-set-xgg5n is not available Jun 19 13:01:49.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:50.736: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:50.736: INFO: Pod daemon-set-xgg5n is not available Jun 19 13:01:50.740: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:51.737: INFO: Wrong image for pod: daemon-set-xgg5n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Jun 19 13:01:51.737: INFO: Pod daemon-set-xgg5n is not available Jun 19 13:01:51.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:52.736: INFO: Pod daemon-set-qg8nr is not available Jun 19 13:01:52.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Jun 19 13:01:52.745: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:52.748: INFO: Number of nodes with available pods: 1 Jun 19 13:01:52.748: INFO: Node iruya-worker2 is running more than one daemon pod Jun 19 13:01:53.895: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:53.899: INFO: Number of nodes with available pods: 1 Jun 19 13:01:53.899: INFO: Node iruya-worker2 is running more than one daemon pod Jun 19 13:01:54.753: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:54.755: INFO: Number of nodes with available pods: 1 Jun 19 13:01:54.755: INFO: Node iruya-worker2 is running more than one daemon pod Jun 19 13:01:55.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:01:55.755: INFO: Number of nodes with available pods: 2 Jun 19 13:01:55.755: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8617, will wait for the garbage collector to delete the pods Jun 19 13:01:55.829: INFO: Deleting DaemonSet.extensions daemon-set took: 6.824436ms Jun 19 13:01:56.129: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.343209ms Jun 19 13:02:01.932: INFO: Number of nodes with available pods: 0 Jun 19 13:02:01.932: INFO: Number of running nodes: 0, number of available pods: 0 Jun 19 13:02:01.936: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8617/daemonsets","resourceVersion":"17312327"},"items":null} Jun 19 13:02:01.939: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8617/pods","resourceVersion":"17312327"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:02:01.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8617" for this suite. Jun 19 13:02:07.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:02:08.047: INFO: namespace daemonsets-8617 deletion completed in 6.095602966s • [SLOW TEST:36.545 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:02:08.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-2270 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 19 13:02:08.154: INFO: Found 0 stateful pods, waiting for 3 Jun 19 13:02:18.158: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 19 13:02:18.158: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 19 13:02:18.158: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 19 13:02:28.160: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 19 13:02:28.160: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 19 13:02:28.160: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 19 13:02:28.187: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Jun 19 
13:02:38.228: INFO: Updating stateful set ss2 Jun 19 13:02:38.245: INFO: Waiting for Pod statefulset-2270/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Jun 19 13:02:48.903: INFO: Found 2 stateful pods, waiting for 3 Jun 19 13:02:58.909: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 19 13:02:58.909: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 19 13:02:58.909: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Jun 19 13:02:58.932: INFO: Updating stateful set ss2 Jun 19 13:02:58.990: INFO: Waiting for Pod statefulset-2270/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 19 13:03:08.998: INFO: Waiting for Pod statefulset-2270/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 19 13:03:19.013: INFO: Updating stateful set ss2 Jun 19 13:03:19.052: INFO: Waiting for StatefulSet statefulset-2270/ss2 to complete update Jun 19 13:03:19.052: INFO: Waiting for Pod statefulset-2270/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Jun 19 13:03:29.061: INFO: Waiting for StatefulSet statefulset-2270/ss2 to complete update Jun 19 13:03:29.061: INFO: Waiting for Pod statefulset-2270/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 19 13:03:39.074: INFO: Deleting all statefulset in ns statefulset-2270 Jun 19 13:03:39.076: INFO: Scaling statefulset ss2 to 0 Jun 19 13:03:59.093: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 13:03:59.095: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:03:59.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2270" for this suite. 
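
The canary and phased behaviour above comes from the RollingUpdate strategy's partition field: only pods whose ordinal is greater than or equal to spec.updateStrategy.rollingUpdate.partition are moved to the new revision, so with three replicas a partition of 2 updates only ss2-2 (the canary), and lowering the partition step by step walks the new revision down through ss2-1 and ss2-0, exactly the order the log shows. A minimal sketch with the k8s.io/api types (illustrative, not the suite's code):

package statefulsetdemo

import (
	appsv1 "k8s.io/api/apps/v1"
)

// setPartition configures a StatefulSet so that only pods with an
// ordinal >= partition are recreated on the next template change.
// With 3 replicas: partition=2 updates only ss2-2; lowering it to 1
// and then 0 phases the rollout across ss2-1 and ss2-0.
func setPartition(ss *appsv1.StatefulSet, partition int32) {
	ss.Spec.UpdateStrategy = appsv1.StatefulSetUpdateStrategy{
		Type: appsv1.RollingUpdateStatefulSetStrategyType,
		RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
			Partition: &partition,
		},
	}
}

The "Restoring Pods to the correct revision when they are deleted" step works the same way: a deleted pod below the partition is recreated at the old revision, one at or above it at the new revision.
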
Jun 19 13:04:05.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:04:05.240: INFO: namespace statefulset-2270 deletion completed in 6.085961238s • [SLOW TEST:117.193 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:04:05.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jun 19 13:04:10.385: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:04:11.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2303" for this suite. 
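
Adoption and release here are a pure function of labels and ownerReferences: when the ReplicaSet is created, its controller finds the pre-existing bare pod whose labels match the selector and adds an ownerReference to it instead of creating a new pod; when the pod's matched label is later changed, the controller releases the pod (the ownerReference is removed) and creates a replacement to restore the desired replica count. A sketch of a matching ReplicaSet, assuming the pod carries the label name=pod-adoption-release (the label key is an assumption drawn from the pod name in the log):

package replicasetdemo

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// matchingReplicaSet selects pods labeled name=pod-adoption-release, so a
// pre-existing bare pod with that label is adopted (it gains an
// ownerReference to the ReplicaSet) rather than duplicated.
func matchingReplicaSet() *appsv1.ReplicaSet {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption-release"}
	return &appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "docker.io/library/nginx:1.14-alpine",
					}},
				},
			},
		},
	}
}
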
Jun 19 13:04:33.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:04:33.557: INFO: namespace replicaset-2303 deletion completed in 22.138841551s • [SLOW TEST:28.317 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:04:33.557: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-wpls STEP: Creating a pod to test atomic-volume-subpath Jun 19 13:04:33.682: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-wpls" in namespace "subpath-8494" to be "success or failure" Jun 19 13:04:33.716: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Pending", Reason="", readiness=false. Elapsed: 33.445689ms Jun 19 13:04:35.720: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038016534s Jun 19 13:04:37.726: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 4.043357819s Jun 19 13:04:39.730: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 6.047497315s Jun 19 13:04:41.734: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 8.051471272s Jun 19 13:04:43.738: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 10.055837619s Jun 19 13:04:45.742: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 12.060077802s Jun 19 13:04:47.747: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 14.064367406s Jun 19 13:04:49.751: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 16.068600866s Jun 19 13:04:51.755: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 18.072884953s Jun 19 13:04:53.760: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 20.077321671s Jun 19 13:04:55.764: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. Elapsed: 22.08183803s Jun 19 13:04:57.769: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Running", Reason="", readiness=true. 
Elapsed: 24.086581185s Jun 19 13:04:59.773: INFO: Pod "pod-subpath-test-projected-wpls": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.090849626s STEP: Saw pod success Jun 19 13:04:59.773: INFO: Pod "pod-subpath-test-projected-wpls" satisfied condition "success or failure" Jun 19 13:04:59.777: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-projected-wpls container test-container-subpath-projected-wpls: STEP: delete the pod Jun 19 13:04:59.824: INFO: Waiting for pod pod-subpath-test-projected-wpls to disappear Jun 19 13:04:59.838: INFO: Pod pod-subpath-test-projected-wpls no longer exists STEP: Deleting pod pod-subpath-test-projected-wpls Jun 19 13:04:59.838: INFO: Deleting pod "pod-subpath-test-projected-wpls" in namespace "subpath-8494" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:04:59.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8494" for this suite. Jun 19 13:05:05.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:05:05.948: INFO: namespace subpath-8494 deletion completed in 6.104811441s • [SLOW TEST:32.391 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:05:05.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 19 13:05:10.567: INFO: Successfully updated pod "labelsupdatee51dc075-927b-4b5c-b34b-0c0007a321f5" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:05:14.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1090" for this suite. 
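
The label update is observable because the pod mounts a downwardAPI volume projecting metadata.labels into a file; when the test patches the pod's labels (the "Successfully updated pod" entry above), the kubelet rewrites that file in place without restarting the container. A minimal sketch of such a volume (names are illustrative):

package downwardapidemo

import corev1 "k8s.io/api/core/v1"

// labelsVolume projects the pod's own labels into a file named "labels".
// The kubelet refreshes the file after the pod's labels change, which is
// what the test above waits to observe.
func labelsVolume() corev1.Volume {
	return corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path:     "labels",
					FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
				}},
			},
		},
	}
}
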
Jun 19 13:05:36.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:05:36.700: INFO: namespace downward-api-1090 deletion completed in 22.102600582s • [SLOW TEST:30.751 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:05:36.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:05:40.839: INFO: Waiting up to 5m0s for pod "client-envvars-6c2b5ed5-42e1-40ef-a011-40f6f4815d62" in namespace "pods-1656" to be "success or failure" Jun 19 13:05:40.871: INFO: Pod "client-envvars-6c2b5ed5-42e1-40ef-a011-40f6f4815d62": Phase="Pending", Reason="", readiness=false. Elapsed: 31.172921ms Jun 19 13:05:42.875: INFO: Pod "client-envvars-6c2b5ed5-42e1-40ef-a011-40f6f4815d62": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035535471s Jun 19 13:05:44.878: INFO: Pod "client-envvars-6c2b5ed5-42e1-40ef-a011-40f6f4815d62": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038750575s STEP: Saw pod success Jun 19 13:05:44.878: INFO: Pod "client-envvars-6c2b5ed5-42e1-40ef-a011-40f6f4815d62" satisfied condition "success or failure" Jun 19 13:05:44.880: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-6c2b5ed5-42e1-40ef-a011-40f6f4815d62 container env3cont: STEP: delete the pod Jun 19 13:05:44.901: INFO: Waiting for pod client-envvars-6c2b5ed5-42e1-40ef-a011-40f6f4815d62 to disappear Jun 19 13:05:44.914: INFO: Pod client-envvars-6c2b5ed5-42e1-40ef-a011-40f6f4815d62 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:05:44.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1656" for this suite. 
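
This test depends on creation order: the kubelet injects <SERVICE_NAME>_SERVICE_HOST and <SERVICE_NAME>_SERVICE_PORT variables (service name upper-cased, dashes mapped to underscores) for every service that already exists when a container starts, so the server pod and its service are created first and the client pod's container (env3cont) only has to print its environment. A sketch of the consumer side, with fooservice as a hypothetical service name:

package envvardemo

import (
	"fmt"
	"os"
)

// printServiceEnv reads the variables the kubelet injects for a service
// named "fooservice" that existed before this container started.
func printServiceEnv() {
	fmt.Println(os.Getenv("FOOSERVICE_SERVICE_HOST")) // cluster IP
	fmt.Println(os.Getenv("FOOSERVICE_SERVICE_PORT")) // first service port
}
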
Jun 19 13:06:22.939: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:06:23.018: INFO: namespace pods-1656 deletion completed in 38.09881385s • [SLOW TEST:46.318 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:06:23.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-4889a152-6b3b-4cd2-b0a8-18fe7c7af078 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-4889a152-6b3b-4cd2-b0a8-18fe7c7af078 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:06:29.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5927" for this suite. 
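
Unlike environment variables, which are fixed at container start, files from a configMap volume are refreshed by the kubelet on its periodic sync, so an update to the ConfigMap object surfaces in the mounted file after a short delay; the gap between "Updating configmap" and the AfterEach above is that propagation window, which the test waits out by polling the container's output. A sketch of the volume side (names are illustrative):

package configmapdemo

import corev1 "k8s.io/api/core/v1"

// configMapVolume mounts all keys of the named ConfigMap as files.
// The kubelet rewrites the files when the ConfigMap object changes.
func configMapVolume(name string) corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: name},
			},
		},
	}
}
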
Jun 19 13:06:51.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:06:51.440: INFO: namespace configmap-5927 deletion completed in 22.111568829s • [SLOW TEST:28.422 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:06:51.441: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-b796dcf0-445b-4aba-899a-91ec313155c8 STEP: Creating a pod to test consume secrets Jun 19 13:06:51.544: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fe80f672-e068-4faa-8daa-c2a20b20a0de" in namespace "projected-5379" to be "success or failure" Jun 19 13:06:51.571: INFO: Pod "pod-projected-secrets-fe80f672-e068-4faa-8daa-c2a20b20a0de": Phase="Pending", Reason="", readiness=false. Elapsed: 27.336619ms Jun 19 13:06:53.576: INFO: Pod "pod-projected-secrets-fe80f672-e068-4faa-8daa-c2a20b20a0de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032071307s Jun 19 13:06:55.580: INFO: Pod "pod-projected-secrets-fe80f672-e068-4faa-8daa-c2a20b20a0de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03643625s STEP: Saw pod success Jun 19 13:06:55.580: INFO: Pod "pod-projected-secrets-fe80f672-e068-4faa-8daa-c2a20b20a0de" satisfied condition "success or failure" Jun 19 13:06:55.584: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-fe80f672-e068-4faa-8daa-c2a20b20a0de container projected-secret-volume-test: STEP: delete the pod Jun 19 13:06:55.602: INFO: Waiting for pod pod-projected-secrets-fe80f672-e068-4faa-8daa-c2a20b20a0de to disappear Jun 19 13:06:55.607: INFO: Pod pod-projected-secrets-fe80f672-e068-4faa-8daa-c2a20b20a0de no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:06:55.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5379" for this suite. 
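
A projected volume composes one or more sources (secrets, configMaps, downward API, service account tokens) into a single mount, and defaultMode sets the permission bits applied to every projected file, which is the property asserted here. A sketch with illustrative names:

package projecteddemo

import corev1 "k8s.io/api/core/v1"

// projectedSecretVolume mounts the named Secret through a projected
// volume; defaultMode (e.g. 0400) sets the permission bits on every
// projected file.
func projectedSecretVolume(secretName string, mode int32) corev1.Volume {
	return corev1.Volume{
		Name: "projected-secret-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					Secret: &corev1.SecretProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}

For example, projectedSecretVolume("projected-secret-test", 0400) yields files readable only by their owner, matching the defaultMode variant exercised above.
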
Jun 19 13:07:01.642: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:07:01.716: INFO: namespace projected-5379 deletion completed in 6.105800515s • [SLOW TEST:10.275 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:07:01.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-4587 I0619 13:07:01.824955 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-4587, replica count: 1 I0619 13:07:02.875485 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0619 13:07:03.875742 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0619 13:07:04.876011 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Jun 19 13:07:05.028: INFO: Created: latency-svc-t6z6w Jun 19 13:07:05.038: INFO: Got endpoints: latency-svc-t6z6w [62.670142ms] Jun 19 13:07:05.085: INFO: Created: latency-svc-kpmmq Jun 19 13:07:05.095: INFO: Got endpoints: latency-svc-kpmmq [55.801734ms] Jun 19 13:07:05.148: INFO: Created: latency-svc-tj8ht Jun 19 13:07:05.163: INFO: Got endpoints: latency-svc-tj8ht [123.96309ms] Jun 19 13:07:05.190: INFO: Created: latency-svc-6gjzq Jun 19 13:07:05.202: INFO: Got endpoints: latency-svc-6gjzq [162.912499ms] Jun 19 13:07:05.219: INFO: Created: latency-svc-mpg5w Jun 19 13:07:05.232: INFO: Got endpoints: latency-svc-mpg5w [193.417272ms] Jun 19 13:07:05.291: INFO: Created: latency-svc-xzcdm Jun 19 13:07:05.295: INFO: Got endpoints: latency-svc-xzcdm [256.161758ms] Jun 19 13:07:05.322: INFO: Created: latency-svc-c2pks Jun 19 13:07:05.335: INFO: Got endpoints: latency-svc-c2pks [296.336488ms] Jun 19 13:07:05.355: INFO: Created: latency-svc-6wcsq Jun 19 13:07:05.370: INFO: Got endpoints: latency-svc-6wcsq [331.804557ms] Jun 19 13:07:05.435: INFO: Created: latency-svc-s2mq2 Jun 19 13:07:05.439: INFO: Got endpoints: latency-svc-s2mq2 [400.342883ms] Jun 19 13:07:05.468: INFO: Created: latency-svc-bd42p Jun 19 13:07:05.495: INFO: Got endpoints: latency-svc-bd42p [456.58199ms] Jun 19 13:07:05.519: INFO: Created: latency-svc-xmktw Jun 19 13:07:05.533: INFO: Got endpoints: latency-svc-xmktw [494.513858ms] Jun 19 13:07:05.586: INFO: Created: latency-svc-wg7dt Jun 19 
13:07:05.587: INFO: Got endpoints: latency-svc-wg7dt [548.302935ms] Jun 19 13:07:05.612: INFO: Created: latency-svc-s7c4j Jun 19 13:07:05.623: INFO: Got endpoints: latency-svc-s7c4j [584.596374ms] Jun 19 13:07:05.642: INFO: Created: latency-svc-t7x8h Jun 19 13:07:05.666: INFO: Got endpoints: latency-svc-t7x8h [627.545907ms] Jun 19 13:07:05.728: INFO: Created: latency-svc-22887 Jun 19 13:07:05.731: INFO: Got endpoints: latency-svc-22887 [692.601487ms] Jun 19 13:07:05.778: INFO: Created: latency-svc-jqmzx Jun 19 13:07:05.804: INFO: Got endpoints: latency-svc-jqmzx [765.912924ms] Jun 19 13:07:05.823: INFO: Created: latency-svc-458xk Jun 19 13:07:05.866: INFO: Got endpoints: latency-svc-458xk [771.332846ms] Jun 19 13:07:05.870: INFO: Created: latency-svc-fgll4 Jun 19 13:07:05.894: INFO: Got endpoints: latency-svc-fgll4 [731.798252ms] Jun 19 13:07:05.922: INFO: Created: latency-svc-42nhg Jun 19 13:07:05.938: INFO: Got endpoints: latency-svc-42nhg [736.566135ms] Jun 19 13:07:06.017: INFO: Created: latency-svc-5rk52 Jun 19 13:07:06.029: INFO: Got endpoints: latency-svc-5rk52 [797.075552ms] Jun 19 13:07:06.057: INFO: Created: latency-svc-8phw9 Jun 19 13:07:06.076: INFO: Got endpoints: latency-svc-8phw9 [781.475008ms] Jun 19 13:07:06.099: INFO: Created: latency-svc-z2lpm Jun 19 13:07:06.147: INFO: Got endpoints: latency-svc-z2lpm [812.195235ms] Jun 19 13:07:06.173: INFO: Created: latency-svc-4vcmg Jun 19 13:07:06.185: INFO: Got endpoints: latency-svc-4vcmg [814.476692ms] Jun 19 13:07:06.203: INFO: Created: latency-svc-blnmv Jun 19 13:07:06.215: INFO: Got endpoints: latency-svc-blnmv [775.892846ms] Jun 19 13:07:06.233: INFO: Created: latency-svc-kspcr Jun 19 13:07:06.245: INFO: Got endpoints: latency-svc-kspcr [750.143672ms] Jun 19 13:07:06.327: INFO: Created: latency-svc-c4ct6 Jun 19 13:07:06.330: INFO: Got endpoints: latency-svc-c4ct6 [796.285447ms] Jun 19 13:07:06.357: INFO: Created: latency-svc-bnmb5 Jun 19 13:07:06.376: INFO: Got endpoints: latency-svc-bnmb5 [789.051209ms] Jun 19 13:07:06.454: INFO: Created: latency-svc-bmm9v Jun 19 13:07:06.513: INFO: Created: latency-svc-gn4vt Jun 19 13:07:06.513: INFO: Got endpoints: latency-svc-bmm9v [889.795838ms] Jun 19 13:07:06.524: INFO: Got endpoints: latency-svc-gn4vt [858.099535ms] Jun 19 13:07:06.598: INFO: Created: latency-svc-w9ldt Jun 19 13:07:06.607: INFO: Got endpoints: latency-svc-w9ldt [876.013413ms] Jun 19 13:07:06.654: INFO: Created: latency-svc-zfpbb Jun 19 13:07:06.668: INFO: Got endpoints: latency-svc-zfpbb [863.311349ms] Jun 19 13:07:06.692: INFO: Created: latency-svc-cg5rb Jun 19 13:07:06.728: INFO: Got endpoints: latency-svc-cg5rb [862.290406ms] Jun 19 13:07:06.740: INFO: Created: latency-svc-cz2nk Jun 19 13:07:06.752: INFO: Got endpoints: latency-svc-cz2nk [857.608049ms] Jun 19 13:07:06.770: INFO: Created: latency-svc-q5zsl Jun 19 13:07:06.797: INFO: Got endpoints: latency-svc-q5zsl [858.830237ms] Jun 19 13:07:06.827: INFO: Created: latency-svc-52p44 Jun 19 13:07:06.867: INFO: Got endpoints: latency-svc-52p44 [837.803724ms] Jun 19 13:07:06.881: INFO: Created: latency-svc-2744b Jun 19 13:07:06.896: INFO: Got endpoints: latency-svc-2744b [819.890444ms] Jun 19 13:07:06.914: INFO: Created: latency-svc-p75dk Jun 19 13:07:06.926: INFO: Got endpoints: latency-svc-p75dk [779.146646ms] Jun 19 13:07:06.950: INFO: Created: latency-svc-dwgp6 Jun 19 13:07:06.963: INFO: Got endpoints: latency-svc-dwgp6 [778.024621ms] Jun 19 13:07:07.028: INFO: Created: latency-svc-nmvfk Jun 19 13:07:07.055: INFO: Got endpoints: latency-svc-nmvfk [839.916424ms] Jun 
19 13:07:07.091: INFO: Created: latency-svc-n9lq2 Jun 19 13:07:07.102: INFO: Got endpoints: latency-svc-n9lq2 [856.333493ms] Jun 19 13:07:07.119: INFO: Created: latency-svc-rnrgf Jun 19 13:07:07.153: INFO: Got endpoints: latency-svc-rnrgf [823.773302ms] Jun 19 13:07:07.160: INFO: Created: latency-svc-wjgjl Jun 19 13:07:07.177: INFO: Got endpoints: latency-svc-wjgjl [800.868886ms] Jun 19 13:07:07.208: INFO: Created: latency-svc-bq694 Jun 19 13:07:07.235: INFO: Got endpoints: latency-svc-bq694 [721.444965ms] Jun 19 13:07:07.298: INFO: Created: latency-svc-67zzp Jun 19 13:07:07.301: INFO: Got endpoints: latency-svc-67zzp [776.119284ms] Jun 19 13:07:07.325: INFO: Created: latency-svc-cr6cb Jun 19 13:07:07.340: INFO: Got endpoints: latency-svc-cr6cb [732.627745ms] Jun 19 13:07:07.370: INFO: Created: latency-svc-nkh89 Jun 19 13:07:07.385: INFO: Got endpoints: latency-svc-nkh89 [717.565645ms] Jun 19 13:07:07.453: INFO: Created: latency-svc-bcn4f Jun 19 13:07:07.457: INFO: Got endpoints: latency-svc-bcn4f [728.398451ms] Jun 19 13:07:07.507: INFO: Created: latency-svc-zcr5q Jun 19 13:07:07.511: INFO: Got endpoints: latency-svc-zcr5q [759.039858ms] Jun 19 13:07:07.603: INFO: Created: latency-svc-pr582 Jun 19 13:07:07.606: INFO: Got endpoints: latency-svc-pr582 [808.530576ms] Jun 19 13:07:07.647: INFO: Created: latency-svc-btfn8 Jun 19 13:07:07.669: INFO: Got endpoints: latency-svc-btfn8 [801.849843ms] Jun 19 13:07:07.685: INFO: Created: latency-svc-c9gsm Jun 19 13:07:07.699: INFO: Got endpoints: latency-svc-c9gsm [802.336477ms] Jun 19 13:07:07.758: INFO: Created: latency-svc-nhdqc Jun 19 13:07:07.784: INFO: Got endpoints: latency-svc-nhdqc [857.261246ms] Jun 19 13:07:07.844: INFO: Created: latency-svc-649nk Jun 19 13:07:07.855: INFO: Got endpoints: latency-svc-649nk [892.303085ms] Jun 19 13:07:07.908: INFO: Created: latency-svc-fbfh6 Jun 19 13:07:07.913: INFO: Got endpoints: latency-svc-fbfh6 [858.0468ms] Jun 19 13:07:07.937: INFO: Created: latency-svc-g4xn8 Jun 19 13:07:07.947: INFO: Got endpoints: latency-svc-g4xn8 [844.620839ms] Jun 19 13:07:07.967: INFO: Created: latency-svc-x7k6f Jun 19 13:07:08.075: INFO: Got endpoints: latency-svc-x7k6f [921.690298ms] Jun 19 13:07:08.090: INFO: Created: latency-svc-scf89 Jun 19 13:07:08.103: INFO: Got endpoints: latency-svc-scf89 [925.444355ms] Jun 19 13:07:08.171: INFO: Created: latency-svc-n4h7f Jun 19 13:07:08.219: INFO: Got endpoints: latency-svc-n4h7f [984.077376ms] Jun 19 13:07:08.231: INFO: Created: latency-svc-zdn6v Jun 19 13:07:08.241: INFO: Got endpoints: latency-svc-zdn6v [940.37468ms] Jun 19 13:07:08.258: INFO: Created: latency-svc-498t8 Jun 19 13:07:08.271: INFO: Got endpoints: latency-svc-498t8 [931.168196ms] Jun 19 13:07:08.288: INFO: Created: latency-svc-msk9h Jun 19 13:07:08.311: INFO: Got endpoints: latency-svc-msk9h [925.916087ms] Jun 19 13:07:08.399: INFO: Created: latency-svc-pqwsh Jun 19 13:07:08.401: INFO: Got endpoints: latency-svc-pqwsh [944.618278ms] Jun 19 13:07:08.441: INFO: Created: latency-svc-shv2l Jun 19 13:07:08.458: INFO: Got endpoints: latency-svc-shv2l [947.000849ms] Jun 19 13:07:08.477: INFO: Created: latency-svc-bgwst Jun 19 13:07:08.488: INFO: Got endpoints: latency-svc-bgwst [882.734213ms] Jun 19 13:07:08.536: INFO: Created: latency-svc-2glgf Jun 19 13:07:08.558: INFO: Got endpoints: latency-svc-2glgf [888.905542ms] Jun 19 13:07:08.615: INFO: Created: latency-svc-rshp7 Jun 19 13:07:08.692: INFO: Got endpoints: latency-svc-rshp7 [993.363521ms] Jun 19 13:07:08.711: INFO: Created: latency-svc-q9qg7 Jun 19 13:07:08.735: 
INFO: Got endpoints: latency-svc-q9qg7 [951.418857ms] Jun 19 13:07:08.787: INFO: Created: latency-svc-5dvfj Jun 19 13:07:08.830: INFO: Got endpoints: latency-svc-5dvfj [974.752667ms] Jun 19 13:07:08.834: INFO: Created: latency-svc-tnwmk Jun 19 13:07:08.851: INFO: Got endpoints: latency-svc-tnwmk [937.560436ms] Jun 19 13:07:08.876: INFO: Created: latency-svc-hcnvm Jun 19 13:07:08.902: INFO: Got endpoints: latency-svc-hcnvm [955.532163ms] Jun 19 13:07:08.927: INFO: Created: latency-svc-5f49g Jun 19 13:07:08.967: INFO: Got endpoints: latency-svc-5f49g [891.904286ms] Jun 19 13:07:08.981: INFO: Created: latency-svc-wzqnt Jun 19 13:07:08.995: INFO: Got endpoints: latency-svc-wzqnt [892.092986ms] Jun 19 13:07:09.050: INFO: Created: latency-svc-mpk27 Jun 19 13:07:09.061: INFO: Got endpoints: latency-svc-mpk27 [842.333451ms] Jun 19 13:07:09.117: INFO: Created: latency-svc-89nr2 Jun 19 13:07:09.156: INFO: Created: latency-svc-7qkzd Jun 19 13:07:09.156: INFO: Got endpoints: latency-svc-89nr2 [915.065167ms] Jun 19 13:07:09.190: INFO: Got endpoints: latency-svc-7qkzd [919.016561ms] Jun 19 13:07:09.249: INFO: Created: latency-svc-mgdt8 Jun 19 13:07:09.278: INFO: Got endpoints: latency-svc-mgdt8 [966.128728ms] Jun 19 13:07:09.278: INFO: Created: latency-svc-m5d4j Jun 19 13:07:09.290: INFO: Got endpoints: latency-svc-m5d4j [888.579243ms] Jun 19 13:07:09.326: INFO: Created: latency-svc-4cmbv Jun 19 13:07:09.338: INFO: Got endpoints: latency-svc-4cmbv [880.213691ms] Jun 19 13:07:09.393: INFO: Created: latency-svc-pxksf Jun 19 13:07:09.412: INFO: Got endpoints: latency-svc-pxksf [923.80048ms] Jun 19 13:07:09.446: INFO: Created: latency-svc-rmtr6 Jun 19 13:07:09.459: INFO: Got endpoints: latency-svc-rmtr6 [901.527263ms] Jun 19 13:07:09.475: INFO: Created: latency-svc-mk45m Jun 19 13:07:09.489: INFO: Got endpoints: latency-svc-mk45m [797.329867ms] Jun 19 13:07:09.537: INFO: Created: latency-svc-48tvc Jun 19 13:07:09.540: INFO: Got endpoints: latency-svc-48tvc [804.636239ms] Jun 19 13:07:09.568: INFO: Created: latency-svc-7b4w6 Jun 19 13:07:09.611: INFO: Got endpoints: latency-svc-7b4w6 [780.477031ms] Jun 19 13:07:09.688: INFO: Created: latency-svc-84wrg Jun 19 13:07:09.691: INFO: Got endpoints: latency-svc-84wrg [840.593865ms] Jun 19 13:07:09.754: INFO: Created: latency-svc-qp7sj Jun 19 13:07:09.771: INFO: Got endpoints: latency-svc-qp7sj [869.354833ms] Jun 19 13:07:09.830: INFO: Created: latency-svc-rss79 Jun 19 13:07:09.850: INFO: Got endpoints: latency-svc-rss79 [882.701641ms] Jun 19 13:07:09.881: INFO: Created: latency-svc-sdtds Jun 19 13:07:09.892: INFO: Got endpoints: latency-svc-sdtds [896.940973ms] Jun 19 13:07:09.913: INFO: Created: latency-svc-5bxqc Jun 19 13:07:09.967: INFO: Got endpoints: latency-svc-5bxqc [905.877068ms] Jun 19 13:07:09.975: INFO: Created: latency-svc-fnk8v Jun 19 13:07:09.988: INFO: Got endpoints: latency-svc-fnk8v [832.103046ms] Jun 19 13:07:10.018: INFO: Created: latency-svc-9qs88 Jun 19 13:07:10.031: INFO: Got endpoints: latency-svc-9qs88 [840.824224ms] Jun 19 13:07:10.054: INFO: Created: latency-svc-kdxz9 Jun 19 13:07:10.147: INFO: Got endpoints: latency-svc-kdxz9 [869.49819ms] Jun 19 13:07:10.150: INFO: Created: latency-svc-wbw4c Jun 19 13:07:10.171: INFO: Got endpoints: latency-svc-wbw4c [881.075494ms] Jun 19 13:07:10.196: INFO: Created: latency-svc-s9grj Jun 19 13:07:10.206: INFO: Got endpoints: latency-svc-s9grj [867.131949ms] Jun 19 13:07:10.229: INFO: Created: latency-svc-pz6zx Jun 19 13:07:10.242: INFO: Got endpoints: latency-svc-pz6zx [829.238264ms] Jun 19 
13:07:10.303: INFO: Created: latency-svc-9hlx7 Jun 19 13:07:10.308: INFO: Got endpoints: latency-svc-9hlx7 [848.193782ms] Jun 19 13:07:10.333: INFO: Created: latency-svc-5fbpx Jun 19 13:07:10.350: INFO: Got endpoints: latency-svc-5fbpx [860.745693ms] Jun 19 13:07:10.382: INFO: Created: latency-svc-b5rdg Jun 19 13:07:10.435: INFO: Got endpoints: latency-svc-b5rdg [895.025699ms] Jun 19 13:07:10.456: INFO: Created: latency-svc-bfltr Jun 19 13:07:10.471: INFO: Got endpoints: latency-svc-bfltr [860.178703ms] Jun 19 13:07:10.498: INFO: Created: latency-svc-p6knv Jun 19 13:07:10.513: INFO: Got endpoints: latency-svc-p6knv [821.823472ms] Jun 19 13:07:10.602: INFO: Created: latency-svc-q7kxz Jun 19 13:07:10.627: INFO: Got endpoints: latency-svc-q7kxz [855.466143ms] Jun 19 13:07:10.628: INFO: Created: latency-svc-mthvc Jun 19 13:07:10.657: INFO: Got endpoints: latency-svc-mthvc [807.345537ms] Jun 19 13:07:10.681: INFO: Created: latency-svc-hjrqg Jun 19 13:07:10.734: INFO: Got endpoints: latency-svc-hjrqg [842.132952ms] Jun 19 13:07:10.763: INFO: Created: latency-svc-ctdp2 Jun 19 13:07:10.791: INFO: Got endpoints: latency-svc-ctdp2 [823.26001ms] Jun 19 13:07:10.831: INFO: Created: latency-svc-brvz6 Jun 19 13:07:10.878: INFO: Got endpoints: latency-svc-brvz6 [889.172562ms] Jun 19 13:07:10.880: INFO: Created: latency-svc-m4tnz Jun 19 13:07:10.893: INFO: Got endpoints: latency-svc-m4tnz [861.758708ms] Jun 19 13:07:10.921: INFO: Created: latency-svc-hdnmm Jun 19 13:07:10.948: INFO: Got endpoints: latency-svc-hdnmm [800.49804ms] Jun 19 13:07:11.016: INFO: Created: latency-svc-4fm9h Jun 19 13:07:11.019: INFO: Got endpoints: latency-svc-4fm9h [847.52818ms] Jun 19 13:07:11.062: INFO: Created: latency-svc-bxj95 Jun 19 13:07:11.075: INFO: Got endpoints: latency-svc-bxj95 [868.891474ms] Jun 19 13:07:11.095: INFO: Created: latency-svc-f6vk9 Jun 19 13:07:11.104: INFO: Got endpoints: latency-svc-f6vk9 [862.018193ms] Jun 19 13:07:11.178: INFO: Created: latency-svc-hnw4p Jun 19 13:07:11.181: INFO: Got endpoints: latency-svc-hnw4p [873.678364ms] Jun 19 13:07:11.218: INFO: Created: latency-svc-bnmtw Jun 19 13:07:11.230: INFO: Got endpoints: latency-svc-bnmtw [879.860418ms] Jun 19 13:07:11.254: INFO: Created: latency-svc-pxgq4 Jun 19 13:07:11.267: INFO: Got endpoints: latency-svc-pxgq4 [831.445954ms] Jun 19 13:07:11.327: INFO: Created: latency-svc-xztzq Jun 19 13:07:11.331: INFO: Got endpoints: latency-svc-xztzq [859.6695ms] Jun 19 13:07:11.359: INFO: Created: latency-svc-d7slt Jun 19 13:07:11.382: INFO: Got endpoints: latency-svc-d7slt [868.353979ms] Jun 19 13:07:11.425: INFO: Created: latency-svc-z874p Jun 19 13:07:11.496: INFO: Got endpoints: latency-svc-z874p [868.485209ms] Jun 19 13:07:11.518: INFO: Created: latency-svc-nck4n Jun 19 13:07:11.538: INFO: Got endpoints: latency-svc-nck4n [880.548517ms] Jun 19 13:07:11.594: INFO: Created: latency-svc-2zm5q Jun 19 13:07:11.632: INFO: Got endpoints: latency-svc-2zm5q [898.146398ms] Jun 19 13:07:11.634: INFO: Created: latency-svc-sshrp Jun 19 13:07:11.653: INFO: Got endpoints: latency-svc-sshrp [861.839364ms] Jun 19 13:07:11.680: INFO: Created: latency-svc-ccn55 Jun 19 13:07:11.695: INFO: Got endpoints: latency-svc-ccn55 [817.111335ms] Jun 19 13:07:11.717: INFO: Created: latency-svc-hknm9 Jun 19 13:07:11.731: INFO: Got endpoints: latency-svc-hknm9 [837.569737ms] Jun 19 13:07:11.788: INFO: Created: latency-svc-78gxn Jun 19 13:07:11.791: INFO: Got endpoints: latency-svc-78gxn [842.676466ms] Jun 19 13:07:11.863: INFO: Created: latency-svc-6qmgd Jun 19 13:07:11.875: INFO: 
Got endpoints: latency-svc-6qmgd [856.285164ms] Jun 19 13:07:11.926: INFO: Created: latency-svc-c65qf Jun 19 13:07:11.935: INFO: Got endpoints: latency-svc-c65qf [860.721658ms] Jun 19 13:07:11.956: INFO: Created: latency-svc-7dklh Jun 19 13:07:11.972: INFO: Got endpoints: latency-svc-7dklh [867.893404ms] Jun 19 13:07:12.019: INFO: Created: latency-svc-7gk6q Jun 19 13:07:12.070: INFO: Got endpoints: latency-svc-7gk6q [888.551179ms] Jun 19 13:07:12.073: INFO: Created: latency-svc-rrxcs Jun 19 13:07:12.086: INFO: Got endpoints: latency-svc-rrxcs [856.065816ms] Jun 19 13:07:12.130: INFO: Created: latency-svc-ttzwp Jun 19 13:07:12.147: INFO: Got endpoints: latency-svc-ttzwp [880.75568ms] Jun 19 13:07:12.208: INFO: Created: latency-svc-b52nt Jun 19 13:07:12.217: INFO: Got endpoints: latency-svc-b52nt [885.874471ms] Jun 19 13:07:12.247: INFO: Created: latency-svc-lb8rr Jun 19 13:07:12.261: INFO: Got endpoints: latency-svc-lb8rr [879.49712ms] Jun 19 13:07:12.283: INFO: Created: latency-svc-l2nr6 Jun 19 13:07:12.304: INFO: Got endpoints: latency-svc-l2nr6 [808.234767ms] Jun 19 13:07:12.358: INFO: Created: latency-svc-n8qwh Jun 19 13:07:12.370: INFO: Got endpoints: latency-svc-n8qwh [831.755053ms] Jun 19 13:07:12.400: INFO: Created: latency-svc-rxzpc Jun 19 13:07:12.424: INFO: Got endpoints: latency-svc-rxzpc [792.131374ms] Jun 19 13:07:12.507: INFO: Created: latency-svc-vkblz Jun 19 13:07:12.514: INFO: Got endpoints: latency-svc-vkblz [861.482573ms] Jun 19 13:07:12.535: INFO: Created: latency-svc-ff96n Jun 19 13:07:12.551: INFO: Got endpoints: latency-svc-ff96n [856.170852ms] Jun 19 13:07:12.580: INFO: Created: latency-svc-2c8lw Jun 19 13:07:12.593: INFO: Got endpoints: latency-svc-2c8lw [861.80518ms] Jun 19 13:07:12.644: INFO: Created: latency-svc-xq682 Jun 19 13:07:12.647: INFO: Got endpoints: latency-svc-xq682 [856.25508ms] Jun 19 13:07:12.691: INFO: Created: latency-svc-sqbp9 Jun 19 13:07:12.739: INFO: Got endpoints: latency-svc-sqbp9 [863.921738ms] Jun 19 13:07:12.819: INFO: Created: latency-svc-kjmx4 Jun 19 13:07:12.827: INFO: Got endpoints: latency-svc-kjmx4 [891.914731ms] Jun 19 13:07:12.850: INFO: Created: latency-svc-bxhv5 Jun 19 13:07:12.864: INFO: Got endpoints: latency-svc-bxhv5 [892.511725ms] Jun 19 13:07:12.880: INFO: Created: latency-svc-lv2gr Jun 19 13:07:12.894: INFO: Got endpoints: latency-svc-lv2gr [824.217126ms] Jun 19 13:07:12.913: INFO: Created: latency-svc-7mqbs Jun 19 13:07:12.955: INFO: Got endpoints: latency-svc-7mqbs [869.153927ms] Jun 19 13:07:12.960: INFO: Created: latency-svc-zwd7x Jun 19 13:07:12.979: INFO: Got endpoints: latency-svc-zwd7x [831.19824ms] Jun 19 13:07:13.009: INFO: Created: latency-svc-b9n6k Jun 19 13:07:13.021: INFO: Got endpoints: latency-svc-b9n6k [804.436971ms] Jun 19 13:07:13.042: INFO: Created: latency-svc-r8s2q Jun 19 13:07:13.075: INFO: Got endpoints: latency-svc-r8s2q [814.239794ms] Jun 19 13:07:13.102: INFO: Created: latency-svc-vzcj8 Jun 19 13:07:13.118: INFO: Got endpoints: latency-svc-vzcj8 [813.771028ms] Jun 19 13:07:13.138: INFO: Created: latency-svc-jbfdk Jun 19 13:07:13.154: INFO: Got endpoints: latency-svc-jbfdk [784.340551ms] Jun 19 13:07:13.171: INFO: Created: latency-svc-5zpk6 Jun 19 13:07:13.201: INFO: Got endpoints: latency-svc-5zpk6 [776.526808ms] Jun 19 13:07:13.224: INFO: Created: latency-svc-qs8j2 Jun 19 13:07:13.291: INFO: Got endpoints: latency-svc-qs8j2 [777.037742ms] Jun 19 13:07:13.345: INFO: Created: latency-svc-svrvm Jun 19 13:07:13.359: INFO: Got endpoints: latency-svc-svrvm [807.990951ms] Jun 19 13:07:13.381: INFO: 
Created: latency-svc-bqnlz Jun 19 13:07:13.395: INFO: Got endpoints: latency-svc-bqnlz [802.359745ms] Jun 19 13:07:13.414: INFO: Created: latency-svc-prf6m Jun 19 13:07:13.425: INFO: Got endpoints: latency-svc-prf6m [778.533154ms] Jun 19 13:07:13.478: INFO: Created: latency-svc-d658v Jun 19 13:07:13.507: INFO: Got endpoints: latency-svc-d658v [767.51762ms] Jun 19 13:07:13.537: INFO: Created: latency-svc-fmqkp Jun 19 13:07:13.552: INFO: Got endpoints: latency-svc-fmqkp [724.518325ms] Jun 19 13:07:13.573: INFO: Created: latency-svc-2fvb9 Jun 19 13:07:13.620: INFO: Got endpoints: latency-svc-2fvb9 [755.814402ms] Jun 19 13:07:13.622: INFO: Created: latency-svc-8k6kt Jun 19 13:07:13.643: INFO: Got endpoints: latency-svc-8k6kt [748.544746ms] Jun 19 13:07:13.671: INFO: Created: latency-svc-glxpx Jun 19 13:07:13.695: INFO: Got endpoints: latency-svc-glxpx [739.945717ms] Jun 19 13:07:13.764: INFO: Created: latency-svc-cg5xr Jun 19 13:07:13.775: INFO: Got endpoints: latency-svc-cg5xr [796.347661ms] Jun 19 13:07:13.801: INFO: Created: latency-svc-fsss9 Jun 19 13:07:13.818: INFO: Got endpoints: latency-svc-fsss9 [796.805622ms] Jun 19 13:07:13.861: INFO: Created: latency-svc-82z8w Jun 19 13:07:13.926: INFO: Got endpoints: latency-svc-82z8w [850.380251ms] Jun 19 13:07:13.960: INFO: Created: latency-svc-9d856 Jun 19 13:07:13.980: INFO: Got endpoints: latency-svc-9d856 [862.350975ms] Jun 19 13:07:14.002: INFO: Created: latency-svc-5qlz5 Jun 19 13:07:14.022: INFO: Got endpoints: latency-svc-5qlz5 [867.867023ms] Jun 19 13:07:14.076: INFO: Created: latency-svc-4mmfh Jun 19 13:07:14.089: INFO: Got endpoints: latency-svc-4mmfh [887.991347ms] Jun 19 13:07:14.146: INFO: Created: latency-svc-7h9v2 Jun 19 13:07:14.161: INFO: Got endpoints: latency-svc-7h9v2 [869.993771ms] Jun 19 13:07:14.225: INFO: Created: latency-svc-bbfpw Jun 19 13:07:14.239: INFO: Got endpoints: latency-svc-bbfpw [880.084106ms] Jun 19 13:07:14.274: INFO: Created: latency-svc-trdsq Jun 19 13:07:14.299: INFO: Got endpoints: latency-svc-trdsq [903.379314ms] Jun 19 13:07:14.322: INFO: Created: latency-svc-ndqr6 Jun 19 13:07:14.405: INFO: Got endpoints: latency-svc-ndqr6 [979.914689ms] Jun 19 13:07:14.434: INFO: Created: latency-svc-l97mc Jun 19 13:07:14.444: INFO: Got endpoints: latency-svc-l97mc [937.055218ms] Jun 19 13:07:14.464: INFO: Created: latency-svc-cp4zk Jun 19 13:07:14.480: INFO: Got endpoints: latency-svc-cp4zk [928.236491ms] Jun 19 13:07:14.561: INFO: Created: latency-svc-g5xvw Jun 19 13:07:14.564: INFO: Got endpoints: latency-svc-g5xvw [943.503122ms] Jun 19 13:07:14.593: INFO: Created: latency-svc-hbqz6 Jun 19 13:07:14.607: INFO: Got endpoints: latency-svc-hbqz6 [963.986997ms] Jun 19 13:07:14.625: INFO: Created: latency-svc-7br4s Jun 19 13:07:14.649: INFO: Got endpoints: latency-svc-7br4s [953.383757ms] Jun 19 13:07:14.717: INFO: Created: latency-svc-pdcd6 Jun 19 13:07:14.734: INFO: Got endpoints: latency-svc-pdcd6 [958.636523ms] Jun 19 13:07:14.779: INFO: Created: latency-svc-7rxh6 Jun 19 13:07:14.794: INFO: Got endpoints: latency-svc-7rxh6 [976.066165ms] Jun 19 13:07:14.814: INFO: Created: latency-svc-qdmrk Jun 19 13:07:14.860: INFO: Got endpoints: latency-svc-qdmrk [933.974782ms] Jun 19 13:07:14.883: INFO: Created: latency-svc-nq79s Jun 19 13:07:14.897: INFO: Got endpoints: latency-svc-nq79s [917.200168ms] Jun 19 13:07:14.913: INFO: Created: latency-svc-25czz Jun 19 13:07:14.937: INFO: Got endpoints: latency-svc-25czz [915.110526ms] Jun 19 13:07:15.004: INFO: Created: latency-svc-m6qff Jun 19 13:07:15.030: INFO: Got endpoints: 
latency-svc-m6qff [941.131574ms] Jun 19 13:07:15.031: INFO: Created: latency-svc-djq7h Jun 19 13:07:15.042: INFO: Got endpoints: latency-svc-djq7h [880.305556ms] Jun 19 13:07:15.061: INFO: Created: latency-svc-dqs5h Jun 19 13:07:15.072: INFO: Got endpoints: latency-svc-dqs5h [832.386393ms] Jun 19 13:07:15.093: INFO: Created: latency-svc-zfbcj Jun 19 13:07:15.135: INFO: Got endpoints: latency-svc-zfbcj [836.561467ms] Jun 19 13:07:15.147: INFO: Created: latency-svc-9c749 Jun 19 13:07:15.180: INFO: Got endpoints: latency-svc-9c749 [774.409749ms] Jun 19 13:07:15.222: INFO: Created: latency-svc-c2kmf Jun 19 13:07:15.233: INFO: Got endpoints: latency-svc-c2kmf [789.184148ms] Jun 19 13:07:15.286: INFO: Created: latency-svc-zscrx Jun 19 13:07:15.293: INFO: Got endpoints: latency-svc-zscrx [813.097479ms] Jun 19 13:07:15.315: INFO: Created: latency-svc-hj8xz Jun 19 13:07:15.329: INFO: Got endpoints: latency-svc-hj8xz [765.804844ms] Jun 19 13:07:15.351: INFO: Created: latency-svc-z4c6m Jun 19 13:07:15.366: INFO: Got endpoints: latency-svc-z4c6m [759.12791ms] Jun 19 13:07:15.435: INFO: Created: latency-svc-xc7fg Jun 19 13:07:15.444: INFO: Got endpoints: latency-svc-xc7fg [794.53507ms] Jun 19 13:07:15.474: INFO: Created: latency-svc-84x6x Jun 19 13:07:15.487: INFO: Got endpoints: latency-svc-84x6x [752.781467ms] Jun 19 13:07:15.510: INFO: Created: latency-svc-vjr89 Jun 19 13:07:15.523: INFO: Got endpoints: latency-svc-vjr89 [728.571424ms] Jun 19 13:07:15.579: INFO: Created: latency-svc-b2hfx Jun 19 13:07:15.582: INFO: Got endpoints: latency-svc-b2hfx [721.698322ms] Jun 19 13:07:15.609: INFO: Created: latency-svc-h78d8 Jun 19 13:07:15.625: INFO: Got endpoints: latency-svc-h78d8 [728.041547ms] Jun 19 13:07:15.645: INFO: Created: latency-svc-9r69z Jun 19 13:07:15.672: INFO: Got endpoints: latency-svc-9r69z [734.739029ms] Jun 19 13:07:15.728: INFO: Created: latency-svc-lzgqn Jun 19 13:07:15.734: INFO: Got endpoints: latency-svc-lzgqn [703.710551ms] Jun 19 13:07:15.757: INFO: Created: latency-svc-qxdw9 Jun 19 13:07:15.770: INFO: Got endpoints: latency-svc-qxdw9 [728.495413ms] Jun 19 13:07:15.789: INFO: Created: latency-svc-bl44x Jun 19 13:07:15.801: INFO: Got endpoints: latency-svc-bl44x [729.33743ms] Jun 19 13:07:15.825: INFO: Created: latency-svc-dgnxg Jun 19 13:07:15.872: INFO: Got endpoints: latency-svc-dgnxg [736.609705ms] Jun 19 13:07:15.895: INFO: Created: latency-svc-5wbj5 Jun 19 13:07:15.910: INFO: Got endpoints: latency-svc-5wbj5 [729.765874ms] Jun 19 13:07:15.930: INFO: Created: latency-svc-v9vhs Jun 19 13:07:16.034: INFO: Got endpoints: latency-svc-v9vhs [800.764947ms] Jun 19 13:07:16.035: INFO: Created: latency-svc-pflws Jun 19 13:07:16.042: INFO: Got endpoints: latency-svc-pflws [748.509081ms] Jun 19 13:07:16.111: INFO: Created: latency-svc-cfvz5 Jun 19 13:07:16.132: INFO: Got endpoints: latency-svc-cfvz5 [802.106732ms] Jun 19 13:07:16.200: INFO: Created: latency-svc-ggr99 Jun 19 13:07:16.222: INFO: Got endpoints: latency-svc-ggr99 [856.386045ms] Jun 19 13:07:16.303: INFO: Created: latency-svc-l8vj9 Jun 19 13:07:16.306: INFO: Got endpoints: latency-svc-l8vj9 [862.551105ms] Jun 19 13:07:16.306: INFO: Latencies: [55.801734ms 123.96309ms 162.912499ms 193.417272ms 256.161758ms 296.336488ms 331.804557ms 400.342883ms 456.58199ms 494.513858ms 548.302935ms 584.596374ms 627.545907ms 692.601487ms 703.710551ms 717.565645ms 721.444965ms 721.698322ms 724.518325ms 728.041547ms 728.398451ms 728.495413ms 728.571424ms 729.33743ms 729.765874ms 731.798252ms 732.627745ms 734.739029ms 736.566135ms 736.609705ms 
739.945717ms 748.509081ms 748.544746ms 750.143672ms 752.781467ms 755.814402ms 759.039858ms 759.12791ms 765.804844ms 765.912924ms 767.51762ms 771.332846ms 774.409749ms 775.892846ms 776.119284ms 776.526808ms 777.037742ms 778.024621ms 778.533154ms 779.146646ms 780.477031ms 781.475008ms 784.340551ms 789.051209ms 789.184148ms 792.131374ms 794.53507ms 796.285447ms 796.347661ms 796.805622ms 797.075552ms 797.329867ms 800.49804ms 800.764947ms 800.868886ms 801.849843ms 802.106732ms 802.336477ms 802.359745ms 804.436971ms 804.636239ms 807.345537ms 807.990951ms 808.234767ms 808.530576ms 812.195235ms 813.097479ms 813.771028ms 814.239794ms 814.476692ms 817.111335ms 819.890444ms 821.823472ms 823.26001ms 823.773302ms 824.217126ms 829.238264ms 831.19824ms 831.445954ms 831.755053ms 832.103046ms 832.386393ms 836.561467ms 837.569737ms 837.803724ms 839.916424ms 840.593865ms 840.824224ms 842.132952ms 842.333451ms 842.676466ms 844.620839ms 847.52818ms 848.193782ms 850.380251ms 855.466143ms 856.065816ms 856.170852ms 856.25508ms 856.285164ms 856.333493ms 856.386045ms 857.261246ms 857.608049ms 858.0468ms 858.099535ms 858.830237ms 859.6695ms 860.178703ms 860.721658ms 860.745693ms 861.482573ms 861.758708ms 861.80518ms 861.839364ms 862.018193ms 862.290406ms 862.350975ms 862.551105ms 863.311349ms 863.921738ms 867.131949ms 867.867023ms 867.893404ms 868.353979ms 868.485209ms 868.891474ms 869.153927ms 869.354833ms 869.49819ms 869.993771ms 873.678364ms 876.013413ms 879.49712ms 879.860418ms 880.084106ms 880.213691ms 880.305556ms 880.548517ms 880.75568ms 881.075494ms 882.701641ms 882.734213ms 885.874471ms 887.991347ms 888.551179ms 888.579243ms 888.905542ms 889.172562ms 889.795838ms 891.904286ms 891.914731ms 892.092986ms 892.303085ms 892.511725ms 895.025699ms 896.940973ms 898.146398ms 901.527263ms 903.379314ms 905.877068ms 915.065167ms 915.110526ms 917.200168ms 919.016561ms 921.690298ms 923.80048ms 925.444355ms 925.916087ms 928.236491ms 931.168196ms 933.974782ms 937.055218ms 937.560436ms 940.37468ms 941.131574ms 943.503122ms 944.618278ms 947.000849ms 951.418857ms 953.383757ms 955.532163ms 958.636523ms 963.986997ms 966.128728ms 974.752667ms 976.066165ms 979.914689ms 984.077376ms 993.363521ms] Jun 19 13:07:16.306: INFO: 50 %ile: 842.676466ms Jun 19 13:07:16.306: INFO: 90 %ile: 931.168196ms Jun 19 13:07:16.307: INFO: 99 %ile: 984.077376ms Jun 19 13:07:16.307: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:07:16.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-4587" for this suite. 
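The spec above times the gap between creating each Service ("Created") and seeing its Endpoints object populated ("Got endpoints"), then summarizes 200 samples as 50/90/99th percentiles. A minimal hand-rolled version of one such measurement, using only kubectl and GNU date (resource names here are illustrative, not from the suite):

    # create a backend and a Service, then poll until the Endpoints
    # object reports at least one ready address
    kubectl create deployment latency-probe --image=nginx
    kubectl expose deployment latency-probe --port=80
    start=$(date +%s%N)
    until [ -n "$(kubectl get endpoints latency-probe \
        -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
      sleep 0.1
    done
    echo "endpoints ready after $(( ($(date +%s%N) - start) / 1000000 )) ms"
    kubectl delete service,deployment latency-probe

The conformance gate is only that the tail latencies are "not very high"; in this run the 99th percentile lands just under one second.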
Jun 19 13:07:38.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:07:38.455: INFO: namespace svc-latency-4587 deletion completed in 22.135668558s • [SLOW TEST:36.739 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:07:38.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-347c3538-9915-4001-8b11-f472de9ec690 STEP: Creating a pod to test consume configMaps Jun 19 13:07:38.590: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0126e45-8a19-4626-9b35-790b6a4ac3fd" in namespace "configmap-4538" to be "success or failure" Jun 19 13:07:38.594: INFO: Pod "pod-configmaps-f0126e45-8a19-4626-9b35-790b6a4ac3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.764183ms Jun 19 13:07:40.669: INFO: Pod "pod-configmaps-f0126e45-8a19-4626-9b35-790b6a4ac3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079295793s Jun 19 13:07:42.672: INFO: Pod "pod-configmaps-f0126e45-8a19-4626-9b35-790b6a4ac3fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082447183s STEP: Saw pod success Jun 19 13:07:42.672: INFO: Pod "pod-configmaps-f0126e45-8a19-4626-9b35-790b6a4ac3fd" satisfied condition "success or failure" Jun 19 13:07:42.675: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f0126e45-8a19-4626-9b35-790b6a4ac3fd container configmap-volume-test: STEP: delete the pod Jun 19 13:07:42.718: INFO: Waiting for pod pod-configmaps-f0126e45-8a19-4626-9b35-790b6a4ac3fd to disappear Jun 19 13:07:42.722: INFO: Pod pod-configmaps-f0126e45-8a19-4626-9b35-790b6a4ac3fd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:07:42.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4538" for this suite. 
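The ConfigMap-as-volume pattern this spec exercises can be reproduced by hand. A minimal sketch with illustrative names (the suite uses its own test image rather than busybox):

    kubectl create configmap demo-cm --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["cat", "/etc/cm/data-1"]   # prints "value-1", then exits
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-cm
    EOF
    kubectl logs pod/cm-volume-demo   # inspect once Phase=Succeeded

The "success or failure" condition in the log is exactly this: the pod must reach Succeeded, and its log output is then checked for the expected content.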
Jun 19 13:07:48.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:07:48.820: INFO: namespace configmap-4538 deletion completed in 6.094674888s • [SLOW TEST:10.364 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:07:48.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition Jun 19 13:07:48.907: INFO: Waiting up to 5m0s for pod "var-expansion-9366dc9b-4e73-41ff-88b3-7025df0b9000" in namespace "var-expansion-1644" to be "success or failure" Jun 19 13:07:48.911: INFO: Pod "var-expansion-9366dc9b-4e73-41ff-88b3-7025df0b9000": Phase="Pending", Reason="", readiness=false. Elapsed: 4.198228ms Jun 19 13:07:50.992: INFO: Pod "var-expansion-9366dc9b-4e73-41ff-88b3-7025df0b9000": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085214476s Jun 19 13:07:53.004: INFO: Pod "var-expansion-9366dc9b-4e73-41ff-88b3-7025df0b9000": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.097155956s STEP: Saw pod success Jun 19 13:07:53.004: INFO: Pod "var-expansion-9366dc9b-4e73-41ff-88b3-7025df0b9000" satisfied condition "success or failure" Jun 19 13:07:53.007: INFO: Trying to get logs from node iruya-worker pod var-expansion-9366dc9b-4e73-41ff-88b3-7025df0b9000 container dapi-container: STEP: delete the pod Jun 19 13:07:53.027: INFO: Waiting for pod var-expansion-9366dc9b-4e73-41ff-88b3-7025df0b9000 to disappear Jun 19 13:07:53.046: INFO: Pod var-expansion-9366dc9b-4e73-41ff-88b3-7025df0b9000 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:07:53.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-1644" for this suite. 
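Env-var composition relies on the $(VAR) syntax, which the kubelet expands from variables defined earlier in the same container spec (ordinary shell $VAR is left untouched until the container runs). A minimal sketch with illustrative values:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-expansion-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "echo $FOOBAR"]
        env:
        - name: FOO
          value: "foo-value"
        - name: FOOBAR
          value: "$(FOO);;$(FOO)"   # kubelet expands to "foo-value;;foo-value"
    EOF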
Jun 19 13:07:59.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:07:59.143: INFO: namespace var-expansion-1644 deletion completed in 6.093124971s • [SLOW TEST:10.323 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:07:59.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-59cf3a2a-c9f6-4a51-8912-2d9fc8759928 STEP: Creating a pod to test consume configMaps Jun 19 13:07:59.225: INFO: Waiting up to 5m0s for pod "pod-configmaps-cbd13baa-9f69-496a-b2ee-72877347423e" in namespace "configmap-1390" to be "success or failure" Jun 19 13:07:59.255: INFO: Pod "pod-configmaps-cbd13baa-9f69-496a-b2ee-72877347423e": Phase="Pending", Reason="", readiness=false. Elapsed: 30.180234ms Jun 19 13:08:01.260: INFO: Pod "pod-configmaps-cbd13baa-9f69-496a-b2ee-72877347423e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034571041s Jun 19 13:08:03.264: INFO: Pod "pod-configmaps-cbd13baa-9f69-496a-b2ee-72877347423e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038550121s STEP: Saw pod success Jun 19 13:08:03.264: INFO: Pod "pod-configmaps-cbd13baa-9f69-496a-b2ee-72877347423e" satisfied condition "success or failure" Jun 19 13:08:03.267: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-cbd13baa-9f69-496a-b2ee-72877347423e container configmap-volume-test: STEP: delete the pod Jun 19 13:08:03.316: INFO: Waiting for pod pod-configmaps-cbd13baa-9f69-496a-b2ee-72877347423e to disappear Jun 19 13:08:03.328: INFO: Pod pod-configmaps-cbd13baa-9f69-496a-b2ee-72877347423e no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:08:03.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1390" for this suite. 
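The defaultMode variant differs from the plain ConfigMap-volume test only in the permission bits stamped onto every projected key. A sketch (names illustrative; busybox stat's -L follows the projection symlinks):

    kubectl create configmap demo-cm-0400 --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-defaultmode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: configmap-volume-test
        image: busybox
        command: ["sh", "-c", "stat -Lc '%a' /etc/cm/data-1"]   # prints 400
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: demo-cm-0400
          defaultMode: 0400   # instead of the default 0644
    EOF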
Jun 19 13:08:09.344: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:08:09.451: INFO: namespace configmap-1390 deletion completed in 6.118876588s • [SLOW TEST:10.307 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:08:09.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 19 13:08:13.706: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:08:13.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2419" for this suite. 
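TerminationMessagePolicy FallbackToLogsOnError reads the termination message from the container log only when the container fails; on success it stays empty, which is what the "Expected: &{} to match" line asserts. A sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termination-demo
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo some-log-line; exit 0"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    # empty on success; would hold the log tail had the container failed
    kubectl get pod termination-demo -o \
      jsonpath='{.status.containerStatuses[0].state.terminated.message}'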
Jun 19 13:08:19.799: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:08:19.880: INFO: namespace container-runtime-2419 deletion completed in 6.098607068s • [SLOW TEST:10.429 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:08:19.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 19 13:08:19.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2552' Jun 19 13:08:22.867: INFO: stderr: "" Jun 19 13:08:22.867: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 19 13:08:22.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2552' Jun 19 13:08:22.967: INFO: stderr: "" Jun 19 13:08:22.967: INFO: stdout: "update-demo-nautilus-tgpbw update-demo-nautilus-tllk4 " Jun 19 13:08:22.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgpbw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:23.060: INFO: stderr: "" Jun 19 13:08:23.060: INFO: stdout: "" Jun 19 13:08:23.060: INFO: update-demo-nautilus-tgpbw is created but not running Jun 19 13:08:28.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2552' Jun 19 13:08:28.172: INFO: stderr: "" Jun 19 13:08:28.172: INFO: stdout: "update-demo-nautilus-tgpbw update-demo-nautilus-tllk4 " Jun 19 13:08:28.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgpbw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:28.267: INFO: stderr: "" Jun 19 13:08:28.267: INFO: stdout: "true" Jun 19 13:08:28.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tgpbw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:28.352: INFO: stderr: "" Jun 19 13:08:28.352: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:08:28.352: INFO: validating pod update-demo-nautilus-tgpbw Jun 19 13:08:28.357: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:08:28.358: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 19 13:08:28.358: INFO: update-demo-nautilus-tgpbw is verified up and running Jun 19 13:08:28.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tllk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:28.457: INFO: stderr: "" Jun 19 13:08:28.457: INFO: stdout: "true" Jun 19 13:08:28.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tllk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:28.557: INFO: stderr: "" Jun 19 13:08:28.557: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:08:28.557: INFO: validating pod update-demo-nautilus-tllk4 Jun 19 13:08:28.570: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:08:28.570: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 19 13:08:28.570: INFO: update-demo-nautilus-tllk4 is verified up and running STEP: scaling down the replication controller Jun 19 13:08:28.575: INFO: scanned /root for discovery docs: Jun 19 13:08:28.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-2552' Jun 19 13:08:29.736: INFO: stderr: "" Jun 19 13:08:29.736: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 19 13:08:29.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2552' Jun 19 13:08:29.848: INFO: stderr: "" Jun 19 13:08:29.848: INFO: stdout: "update-demo-nautilus-tgpbw update-demo-nautilus-tllk4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 19 13:08:34.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2552' Jun 19 13:08:34.949: INFO: stderr: "" Jun 19 13:08:34.949: INFO: stdout: "update-demo-nautilus-tgpbw update-demo-nautilus-tllk4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 19 13:08:39.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2552' Jun 19 13:08:40.047: INFO: stderr: "" Jun 19 13:08:40.047: INFO: stdout: "update-demo-nautilus-tgpbw update-demo-nautilus-tllk4 " STEP: Replicas for name=update-demo: expected=1 actual=2 Jun 19 13:08:45.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2552' Jun 19 13:08:45.155: INFO: stderr: "" Jun 19 13:08:45.155: INFO: stdout: "update-demo-nautilus-tllk4 " Jun 19 13:08:45.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tllk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:45.239: INFO: stderr: "" Jun 19 13:08:45.239: INFO: stdout: "true" Jun 19 13:08:45.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tllk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:45.329: INFO: stderr: "" Jun 19 13:08:45.329: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:08:45.329: INFO: validating pod update-demo-nautilus-tllk4 Jun 19 13:08:45.332: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:08:45.332: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 19 13:08:45.332: INFO: update-demo-nautilus-tllk4 is verified up and running STEP: scaling up the replication controller Jun 19 13:08:45.335: INFO: scanned /root for discovery docs: Jun 19 13:08:45.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-2552' Jun 19 13:08:46.481: INFO: stderr: "" Jun 19 13:08:46.481: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Jun 19 13:08:46.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2552' Jun 19 13:08:46.582: INFO: stderr: "" Jun 19 13:08:46.582: INFO: stdout: "update-demo-nautilus-hmxhr update-demo-nautilus-tllk4 " Jun 19 13:08:46.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hmxhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:46.681: INFO: stderr: "" Jun 19 13:08:46.682: INFO: stdout: "" Jun 19 13:08:46.682: INFO: update-demo-nautilus-hmxhr is created but not running Jun 19 13:08:51.682: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2552' Jun 19 13:08:51.789: INFO: stderr: "" Jun 19 13:08:51.789: INFO: stdout: "update-demo-nautilus-hmxhr update-demo-nautilus-tllk4 " Jun 19 13:08:51.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hmxhr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:51.877: INFO: stderr: "" Jun 19 13:08:51.877: INFO: stdout: "true" Jun 19 13:08:51.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hmxhr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:51.966: INFO: stderr: "" Jun 19 13:08:51.966: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:08:51.966: INFO: validating pod update-demo-nautilus-hmxhr Jun 19 13:08:51.970: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:08:51.970: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 19 13:08:51.970: INFO: update-demo-nautilus-hmxhr is verified up and running Jun 19 13:08:51.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tllk4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:52.077: INFO: stderr: "" Jun 19 13:08:52.077: INFO: stdout: "true" Jun 19 13:08:52.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tllk4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2552' Jun 19 13:08:52.166: INFO: stderr: "" Jun 19 13:08:52.166: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:08:52.166: INFO: validating pod update-demo-nautilus-tllk4 Jun 19 13:08:52.169: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:08:52.169: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jun 19 13:08:52.169: INFO: update-demo-nautilus-tllk4 is verified up and running STEP: using delete to clean up resources Jun 19 13:08:52.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2552' Jun 19 13:08:52.271: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 19 13:08:52.271: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 19 13:08:52.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2552' Jun 19 13:08:52.364: INFO: stderr: "No resources found.\n" Jun 19 13:08:52.364: INFO: stdout: "" Jun 19 13:08:52.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2552 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 19 13:08:52.466: INFO: stderr: "" Jun 19 13:08:52.466: INFO: stdout: "update-demo-nautilus-hmxhr\nupdate-demo-nautilus-tllk4\n" Jun 19 13:08:52.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2552' Jun 19 13:08:53.073: INFO: stderr: "No resources found.\n" Jun 19 13:08:53.073: INFO: stdout: "" Jun 19 13:08:53.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2552 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 19 13:08:53.165: INFO: stderr: "" Jun 19 13:08:53.165: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:08:53.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2552" for this suite. 
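Stripped of the Go-template polling, the scale exercise reduces to two kubectl scale calls with a convergence check in between (the scale and get commands are the ones from the run; the poll loop is sketched):

    kubectl scale rc update-demo-nautilus --replicas=1 --timeout=5m
    # poll until exactly one pod matches; the log shows ~15s of
    # "expected=1 actual=2" retries while the old pod terminates
    kubectl get pods -l name=update-demo -o template \
      --template='{{range .items}}{{.metadata.name}} {{end}}'
    kubectl scale rc update-demo-nautilus --replicas=2 --timeout=5m

Note that scaling back up creates a brand-new pod (update-demo-nautilus-hmxhr); the pod removed on scale-down is not resurrected.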
Jun 19 13:09:15.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:09:15.480: INFO: namespace kubectl-2552 deletion completed in 22.312621114s • [SLOW TEST:55.600 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:09:15.481: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Jun 19 13:09:15.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Jun 19 13:09:15.712: INFO: stderr: "" Jun 19 13:09:15.712: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:09:15.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7637" for this suite. 
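The assertion reduces to checking that the core group appears in the served list, e.g.:

    kubectl api-versions | grep -x v1 && echo "core v1 API is served"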
Jun 19 13:09:21.732: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:09:21.820: INFO: namespace kubectl-7637 deletion completed in 6.104087132s • [SLOW TEST:6.339 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:09:21.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 19 13:09:21.898: INFO: Waiting up to 5m0s for pod "pod-6bb14174-5270-4742-82b4-039f37a422d8" in namespace "emptydir-4008" to be "success or failure" Jun 19 13:09:21.902: INFO: Pod "pod-6bb14174-5270-4742-82b4-039f37a422d8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.976409ms Jun 19 13:09:23.906: INFO: Pod "pod-6bb14174-5270-4742-82b4-039f37a422d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008723492s Jun 19 13:09:25.909: INFO: Pod "pod-6bb14174-5270-4742-82b4-039f37a422d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011709145s STEP: Saw pod success Jun 19 13:09:25.909: INFO: Pod "pod-6bb14174-5270-4742-82b4-039f37a422d8" satisfied condition "success or failure" Jun 19 13:09:25.911: INFO: Trying to get logs from node iruya-worker2 pod pod-6bb14174-5270-4742-82b4-039f37a422d8 container test-container: STEP: delete the pod Jun 19 13:09:25.927: INFO: Waiting for pod pod-6bb14174-5270-4742-82b4-039f37a422d8 to disappear Jun 19 13:09:25.931: INFO: Pod pod-6bb14174-5270-4742-82b4-039f37a422d8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:09:25.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4008" for this suite. 
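The (root,0777,default) triple encodes user, file mode, and medium: run as root, create a 0777 file, on the default node-disk-backed medium. emptyDir itself carries no mode field; the permissions are created and verified by the test container. A hand-rolled equivalent (illustrative names; the suite uses its mounttest image):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c",
          "touch /test/f && chmod 0777 /test/f && stat -c '%a %U' /test/f"]
        volumeMounts:
        - name: scratch
          mountPath: /test
      volumes:
      - name: scratch
        emptyDir: {}   # default medium; medium: Memory gives the tmpfs variants
    EOF

The (root,0666,tmpfs) spec later in this run is the same shape with medium: Memory and mode 0666.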
Jun 19 13:09:31.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:09:32.067: INFO: namespace emptydir-4008 deletion completed in 6.132754294s • [SLOW TEST:10.245 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:09:32.067: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 19 13:09:32.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-122' Jun 19 13:09:32.290: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 19 13:09:32.290: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Jun 19 13:09:32.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-122' Jun 19 13:09:32.461: INFO: stderr: "" Jun 19 13:09:32.461: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:09:32.461: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-122" for this suite. 
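The deprecation warning is expected on this client version; --generator=job/v1 was already slated for removal. A non-deprecated equivalent of the same create/verify/delete cycle, assuming kubectl create job is available on the client in use:

    kubectl create job e2e-test-nginx-job \
      --image=docker.io/library/nginx:1.14-alpine
    kubectl get jobs e2e-test-nginx-job      # verify it was created
    kubectl delete jobs e2e-test-nginx-job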
Jun 19 13:09:38.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:09:38.554: INFO: namespace kubectl-122 deletion completed in 6.089417346s • [SLOW TEST:6.487 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:09:38.555: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-b2cb89a4-7793-44a6-aa6c-ac7a883a281f in namespace container-probe-3201 Jun 19 13:09:42.634: INFO: Started pod busybox-b2cb89a4-7793-44a6-aa6c-ac7a883a281f in namespace container-probe-3201 STEP: checking the pod's current state and verifying that restartCount is present Jun 19 13:09:42.637: INFO: Initial restart count of pod busybox-b2cb89a4-7793-44a6-aa6c-ac7a883a281f is 0 Jun 19 13:10:38.802: INFO: Restart count of pod container-probe-3201/busybox-b2cb89a4-7793-44a6-aa6c-ac7a883a281f is now 1 (56.16459872s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:10:38.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3201" for this suite. 
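The restart at ~56s is the standard exec-probe pattern: the container removes its own health file partway through its life, the "cat /tmp/health" probe starts failing, and the kubelet restarts the container. A minimal sketch with illustrative timings:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: liveness-exec-demo
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sh", "-c",
          "touch /tmp/health; sleep 30; rm -f /tmp/health; sleep 600"]
        livenessProbe:
          exec:
            command: ["cat", "/tmp/health"]
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # after ~30s plus a few failed probes, restartCount goes 0 -> 1
    kubectl get pod liveness-exec-demo \
      -o jsonpath='{.status.containerStatuses[0].restartCount}'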
Jun 19 13:10:44.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:10:44.947: INFO: namespace container-probe-3201 deletion completed in 6.126392292s • [SLOW TEST:66.393 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:10:44.948: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Jun 19 13:10:45.013: INFO: Pod name pod-release: Found 0 pods out of 1 Jun 19 13:10:50.018: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:10:51.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-8998" for this suite. 
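Release is the inverse of adoption: when a pod's labels stop matching, the RC clears its ownerReference and creates a replacement to restore the replica count. Sketched with a hypothetical pod name:

    # given an RC whose selector is name=pod-release, with one pod:
    kubectl label pod pod-release-abc12 name=not-pod-release --overwrite
    # the pod keeps running but is orphaned; empty output here means no
    # owner remains, and the RC starts a fresh pod to satisfy replicas=1
    kubectl get pod pod-release-abc12 -o jsonpath='{.metadata.ownerReferences}'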
Jun 19 13:10:57.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:10:57.155: INFO: namespace replication-controller-8998 deletion completed in 6.109834461s • [SLOW TEST:12.208 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:10:57.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 19 13:11:01.919: INFO: Successfully updated pod "annotationupdate7d2db108-4ba2-473f-ac16-5ef366890c70" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:11:03.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5323" for this suite. 
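Downward-API volumes are re-projected on the kubelet sync loop, so annotation edits reach the mounted file without restarting the pod (downward-API env vars, by contrast, are fixed at container start). A sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: annotationupdate-demo
      annotations:
        build: "one"
    spec:
      containers:
      - name: client
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
    EOF
    kubectl annotate pod annotationupdate-demo build=two --overwrite
    # within a sync period the mounted file switches to build="two"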
Jun 19 13:11:25.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:11:26.048: INFO: namespace downward-api-5323 deletion completed in 22.093171507s • [SLOW TEST:28.892 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:11:26.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Jun 19 13:11:26.118: INFO: Waiting up to 5m0s for pod "pod-d6b02c2d-3acf-41da-b5a0-5951b9d397d3" in namespace "emptydir-7942" to be "success or failure" Jun 19 13:11:26.132: INFO: Pod "pod-d6b02c2d-3acf-41da-b5a0-5951b9d397d3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.704319ms Jun 19 13:11:28.247: INFO: Pod "pod-d6b02c2d-3acf-41da-b5a0-5951b9d397d3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.129424237s Jun 19 13:11:30.251: INFO: Pod "pod-d6b02c2d-3acf-41da-b5a0-5951b9d397d3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.133191149s STEP: Saw pod success Jun 19 13:11:30.251: INFO: Pod "pod-d6b02c2d-3acf-41da-b5a0-5951b9d397d3" satisfied condition "success or failure" Jun 19 13:11:30.254: INFO: Trying to get logs from node iruya-worker2 pod pod-d6b02c2d-3acf-41da-b5a0-5951b9d397d3 container test-container: STEP: delete the pod Jun 19 13:11:30.267: INFO: Waiting for pod pod-d6b02c2d-3acf-41da-b5a0-5951b9d397d3 to disappear Jun 19 13:11:30.271: INFO: Pod pod-d6b02c2d-3acf-41da-b5a0-5951b9d397d3 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:11:30.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7942" for this suite. 
Jun 19 13:11:36.323: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:11:36.408: INFO: namespace emptydir-7942 deletion completed in 6.134271725s • [SLOW TEST:10.360 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:11:36.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 19 13:11:36.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-773' Jun 19 13:11:36.873: INFO: stderr: "" Jun 19 13:11:36.873: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 19 13:11:37.877: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:11:37.877: INFO: Found 0 / 1 Jun 19 13:11:39.176: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:11:39.176: INFO: Found 0 / 1 Jun 19 13:11:39.877: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:11:39.877: INFO: Found 0 / 1 Jun 19 13:11:40.878: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:11:40.879: INFO: Found 1 / 1 Jun 19 13:11:40.879: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Jun 19 13:11:40.882: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:11:40.882: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 19 13:11:40.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-8xg84 --namespace=kubectl-773 -p {"metadata":{"annotations":{"x":"y"}}}' Jun 19 13:11:40.998: INFO: stderr: "" Jun 19 13:11:40.998: INFO: stdout: "pod/redis-master-8xg84 patched\n" STEP: checking annotations Jun 19 13:11:41.002: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:11:41.002: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:11:41.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-773" for this suite. 
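The ForEach patch loop in plain shell (selector and patch body taken from the run above; -o name emits pod/<name>, which kubectl patch accepts directly):

    for p in $(kubectl get pods -l app=redis -o name); do
      kubectl patch "$p" -p '{"metadata":{"annotations":{"x":"y"}}}'
    done
    kubectl get pods -l app=redis \
      -o jsonpath='{.items[*].metadata.annotations.x}'   # -> y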
Jun 19 13:12:03.030: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:12:03.118: INFO: namespace kubectl-773 deletion completed in 22.112623013s • [SLOW TEST:26.709 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:12:03.118: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-88a77359-0269-4769-a6a5-3f5b5ae7d0e9 STEP: Creating secret with name s-test-opt-upd-5170d757-81a4-4016-8dd0-8ad05e356804 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-88a77359-0269-4769-a6a5-3f5b5ae7d0e9 STEP: Updating secret s-test-opt-upd-5170d757-81a4-4016-8dd0-8ad05e356804 STEP: Creating secret with name s-test-opt-create-a15466bf-0d8f-45f7-95d0-a4cbadfc3a89 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:13:15.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-665" for this suite. 
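optional: true on the secret volume source is what lets the pod stay Running while secrets are deleted and created underneath it. A sketch with illustrative names:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: optional-secret-demo
    spec:
      containers:
      - name: watcher
        image: busybox
        command: ["sh", "-c", "while true; do ls /etc/opt; sleep 5; done"]
        volumeMounts:
        - name: opt
          mountPath: /etc/opt
      volumes:
      - name: opt
        secret:
          secretName: demo-secret
          optional: true   # pod starts even though the secret is absent
    EOF
    kubectl create secret generic demo-secret --from-literal=k=v
    # within a kubelet sync period the file "k" appears under /etc/opt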
Jun 19 13:13:37.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:13:37.792: INFO: namespace secrets-665 deletion completed in 22.156643708s • [SLOW TEST:94.674 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:13:37.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-zw25 STEP: Creating a pod to test atomic-volume-subpath Jun 19 13:13:37.926: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zw25" in namespace "subpath-2019" to be "success or failure" Jun 19 13:13:37.930: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.748637ms Jun 19 13:13:39.934: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008151841s Jun 19 13:13:41.938: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 4.012329383s Jun 19 13:13:43.942: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 6.016798697s Jun 19 13:13:45.947: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 8.021008748s Jun 19 13:13:47.950: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 10.024570447s Jun 19 13:13:49.954: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 12.028189657s Jun 19 13:13:51.958: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 14.032627143s Jun 19 13:13:53.962: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 16.036534777s Jun 19 13:13:55.967: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 18.040947777s Jun 19 13:13:57.971: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 20.045564887s Jun 19 13:13:59.976: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Running", Reason="", readiness=true. Elapsed: 22.050251695s Jun 19 13:14:01.980: INFO: Pod "pod-subpath-test-configmap-zw25": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054462716s STEP: Saw pod success Jun 19 13:14:01.980: INFO: Pod "pod-subpath-test-configmap-zw25" satisfied condition "success or failure" Jun 19 13:14:01.983: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-zw25 container test-container-subpath-configmap-zw25: STEP: delete the pod Jun 19 13:14:02.008: INFO: Waiting for pod pod-subpath-test-configmap-zw25 to disappear Jun 19 13:14:02.011: INFO: Pod pod-subpath-test-configmap-zw25 no longer exists STEP: Deleting pod pod-subpath-test-configmap-zw25 Jun 19 13:14:02.011: INFO: Deleting pod "pod-subpath-test-configmap-zw25" in namespace "subpath-2019" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:14:02.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2019" for this suite. Jun 19 13:14:08.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:14:08.141: INFO: namespace subpath-2019 deletion completed in 6.12520725s • [SLOW TEST:30.349 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:14:08.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ad552350-2f53-45ed-84c0-e74260bdecc8 STEP: Creating a pod to test consume configMaps Jun 19 13:14:08.246: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8d53d4a4-9764-4767-8271-40feb49bc8cf" in namespace "projected-5075" to be "success or failure" Jun 19 13:14:08.250: INFO: Pod "pod-projected-configmaps-8d53d4a4-9764-4767-8271-40feb49bc8cf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832142ms Jun 19 13:14:10.254: INFO: Pod "pod-projected-configmaps-8d53d4a4-9764-4767-8271-40feb49bc8cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008226871s Jun 19 13:14:12.259: INFO: Pod "pod-projected-configmaps-8d53d4a4-9764-4767-8271-40feb49bc8cf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012861897s STEP: Saw pod success Jun 19 13:14:12.259: INFO: Pod "pod-projected-configmaps-8d53d4a4-9764-4767-8271-40feb49bc8cf" satisfied condition "success or failure" Jun 19 13:14:12.262: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-8d53d4a4-9764-4767-8271-40feb49bc8cf container projected-configmap-volume-test: STEP: delete the pod Jun 19 13:14:12.312: INFO: Waiting for pod pod-projected-configmaps-8d53d4a4-9764-4767-8271-40feb49bc8cf to disappear Jun 19 13:14:12.316: INFO: Pod pod-projected-configmaps-8d53d4a4-9764-4767-8271-40feb49bc8cf no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:14:12.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5075" for this suite. Jun 19 13:14:18.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:14:18.409: INFO: namespace projected-5075 deletion completed in 6.091013378s • [SLOW TEST:10.268 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:14:18.410: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 19 13:14:26.597: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 19 13:14:26.604: INFO: Pod pod-with-poststart-http-hook still exists Jun 19 13:14:28.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 19 13:14:28.608: INFO: Pod pod-with-poststart-http-hook still exists Jun 19 13:14:30.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 19 13:14:30.608: INFO: Pod pod-with-poststart-http-hook still exists Jun 19 13:14:32.604: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Jun 19 13:14:32.608: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:14:32.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3576" for this suite. Jun 19 13:14:54.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:14:54.727: INFO: namespace container-lifecycle-hook-3576 deletion completed in 22.113164573s • [SLOW TEST:36.317 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:14:54.728: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:14:59.807: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1001" for this suite. 
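
For reference, the adoption flow this ReplicationController test drives can be expressed with two objects: a bare pod carrying a 'name' label, and an RC whose selector matches that label, so the controller adopts the orphan by adding itself as an ownerReference instead of spawning a new replica. A minimal Go sketch against the k8s.io/api types; the names, label value, and image are illustrative, not the test's actual values.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"name": "pod-adoption"} // label the selector will match

	// An orphan pod created first, with no ownerReferences.
	orphan := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pod-adoption", Image: "nginx"}},
		},
	}

	// A replication controller whose selector matches the orphan's labels;
	// on creation the RC controller adopts the existing pod rather than
	// creating a second replica.
	replicas := int32(1)
	rc := corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       orphan.Spec,
			},
		},
	}

	for _, obj := range []interface{}{orphan, rc} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
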
Jun 19 13:15:21.830: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:15:21.907: INFO: namespace replication-controller-1001 deletion completed in 22.096051558s • [SLOW TEST:27.180 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:15:21.908: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:15:27.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4206" for this suite. 
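
The ordering guarantee this Watchers test checks comes from starting several watches at the same resourceVersion and requiring that they all deliver events identically. A minimal client-go sketch of opening such a watch, using the context-less signature contemporary with this v1.15 run (newer client-go releases take a context.Context as the first argument); the namespace and resourceVersion are illustrative.

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Open a watch on configmaps starting from a recorded resourceVersion.
	// Watchers started from the same version must replay the same events in
	// the same order, which is the invariant the test asserts across many
	// concurrent watchers.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		ResourceVersion: "17316000", // illustrative; the test replays versions it observed
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println(ev.Type) // ADDED / MODIFIED / DELETED, in server order
	}
}
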
Jun 19 13:15:33.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:15:33.716: INFO: namespace watch-4206 deletion completed in 6.200924458s • [SLOW TEST:11.808 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:15:33.716: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-mn8x STEP: Creating a pod to test atomic-volume-subpath Jun 19 13:15:33.798: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mn8x" in namespace "subpath-5597" to be "success or failure" Jun 19 13:15:33.802: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.216626ms Jun 19 13:15:35.806: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008168775s Jun 19 13:15:37.811: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 4.012738854s Jun 19 13:15:39.815: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 6.017008194s Jun 19 13:15:41.819: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 8.021228072s Jun 19 13:15:43.823: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 10.024872816s Jun 19 13:15:45.827: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 12.028942198s Jun 19 13:15:47.832: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 14.033586992s Jun 19 13:15:49.837: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 16.038798348s Jun 19 13:15:51.841: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 18.042810139s Jun 19 13:15:53.845: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 20.04656905s Jun 19 13:15:55.848: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Running", Reason="", readiness=true. Elapsed: 22.05010246s Jun 19 13:15:57.853: INFO: Pod "pod-subpath-test-configmap-mn8x": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.054318427s STEP: Saw pod success Jun 19 13:15:57.853: INFO: Pod "pod-subpath-test-configmap-mn8x" satisfied condition "success or failure" Jun 19 13:15:57.855: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-mn8x container test-container-subpath-configmap-mn8x: STEP: delete the pod Jun 19 13:15:57.886: INFO: Waiting for pod pod-subpath-test-configmap-mn8x to disappear Jun 19 13:15:57.892: INFO: Pod pod-subpath-test-configmap-mn8x no longer exists STEP: Deleting pod pod-subpath-test-configmap-mn8x Jun 19 13:15:57.892: INFO: Deleting pod "pod-subpath-test-configmap-mn8x" in namespace "subpath-5597" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:15:57.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-5597" for this suite. Jun 19 13:16:03.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:16:03.991: INFO: namespace subpath-5597 deletion completed in 6.076311945s • [SLOW TEST:30.275 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:16:03.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-f5c481fd-496d-43a3-b744-06400e0f5cef STEP: Creating configMap with name cm-test-opt-upd-e42909ae-2e58-4711-9c14-7c43b11de2e1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f5c481fd-496d-43a3-b744-06400e0f5cef STEP: Updating configmap cm-test-opt-upd-e42909ae-2e58-4711-9c14-7c43b11de2e1 STEP: Creating configMap with name cm-test-opt-create-6da2a7fa-7af1-4c67-a357-8db82f25e109 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:16:12.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7931" for this suite. 
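
The projected-configmap behavior above hinges on the sources being marked optional, which keeps the pod healthy while one configmap is deleted and another is created, with the kubelet reflecting each change in the mounted files. A sketch of such a projected volume; the volume and configmap names are illustrative stand-ins for the generated ones in the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-configmap-volume", // illustrative name
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
							Optional:             &optional, // pod stays healthy after this configmap is deleted
						},
					},
					{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
							Optional:             &optional, // updates here show up in the mounted files
						},
					},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
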
Jun 19 13:16:34.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:16:34.254: INFO: namespace projected-7931 deletion completed in 22.093600531s • [SLOW TEST:30.264 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:16:34.255: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Jun 19 13:16:38.841: INFO: Successfully updated pod "pod-update-activedeadlineseconds-ae740a02-d3ce-42da-ab9f-44c49363816b" Jun 19 13:16:38.841: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-ae740a02-d3ce-42da-ab9f-44c49363816b" in namespace "pods-6336" to be "terminated due to deadline exceeded" Jun 19 13:16:38.864: INFO: Pod "pod-update-activedeadlineseconds-ae740a02-d3ce-42da-ab9f-44c49363816b": Phase="Running", Reason="", readiness=true. Elapsed: 22.883342ms Jun 19 13:16:40.869: INFO: Pod "pod-update-activedeadlineseconds-ae740a02-d3ce-42da-ab9f-44c49363816b": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.028058559s Jun 19 13:16:40.869: INFO: Pod "pod-update-activedeadlineseconds-ae740a02-d3ce-42da-ab9f-44c49363816b" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:16:40.869: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6336" for this suite. 
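
activeDeadlineSeconds is one of the few pod-spec fields that may be changed on a live pod (as far as API validation allows, it can be set or shortened but not extended or removed), which is why the test can update it on a running pod and then watch the pod fail with reason DeadlineExceeded. A sketch of a pod carrying the field; the name, image, and deadline value are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Once the deadline elapses, the kubelet kills the pod and it is marked
	// Failed with reason DeadlineExceeded, exactly as in the log above.
	deadline := int64(5) // illustrative value, in seconds
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}

The same update can be applied from the command line with a strategic merge patch, e.g. kubectl patch pod <name> -p '{"spec":{"activeDeadlineSeconds":5}}'.
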
Jun 19 13:16:46.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:16:46.976: INFO: namespace pods-6336 deletion completed in 6.102539975s • [SLOW TEST:12.721 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:16:46.977: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: executing a command with run --rm and attach with stdin Jun 19 13:16:47.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6925 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jun 19 13:16:50.276: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0619 13:16:50.185082 1025 log.go:172] (0xc000a0e0b0) (0xc0003ce140) Create stream\nI0619 13:16:50.185322 1025 log.go:172] (0xc000a0e0b0) (0xc0003ce140) Stream added, broadcasting: 1\nI0619 13:16:50.188144 1025 log.go:172] (0xc000a0e0b0) Reply frame received for 1\nI0619 13:16:50.188180 1025 log.go:172] (0xc000a0e0b0) (0xc0003ce1e0) Create stream\nI0619 13:16:50.188189 1025 log.go:172] (0xc000a0e0b0) (0xc0003ce1e0) Stream added, broadcasting: 3\nI0619 13:16:50.189300 1025 log.go:172] (0xc000a0e0b0) Reply frame received for 3\nI0619 13:16:50.189361 1025 log.go:172] (0xc000a0e0b0) (0xc000420000) Create stream\nI0619 13:16:50.189383 1025 log.go:172] (0xc000a0e0b0) (0xc000420000) Stream added, broadcasting: 5\nI0619 13:16:50.190456 1025 log.go:172] (0xc000a0e0b0) Reply frame received for 5\nI0619 13:16:50.190477 1025 log.go:172] (0xc000a0e0b0) (0xc0001fc5a0) Create stream\nI0619 13:16:50.190483 1025 log.go:172] (0xc000a0e0b0) (0xc0001fc5a0) Stream added, broadcasting: 7\nI0619 13:16:50.191546 1025 log.go:172] (0xc000a0e0b0) Reply frame received for 7\nI0619 13:16:50.191693 1025 log.go:172] (0xc0003ce1e0) (3) Writing data frame\nI0619 13:16:50.191784 1025 log.go:172] (0xc0003ce1e0) (3) Writing data frame\nI0619 13:16:50.192677 1025 log.go:172] (0xc000a0e0b0) Data frame received for 5\nI0619 13:16:50.192689 1025 log.go:172] (0xc000420000) (5) Data frame handling\nI0619 13:16:50.192694 1025 log.go:172] (0xc000420000) (5) Data frame sent\nI0619 13:16:50.193441 1025 log.go:172] (0xc000a0e0b0) Data frame received for 5\nI0619 13:16:50.193464 1025 log.go:172] (0xc000420000) (5) Data frame handling\nI0619 13:16:50.193483 1025 log.go:172] (0xc000420000) (5) Data frame sent\nI0619 13:16:50.251420 1025 log.go:172] (0xc000a0e0b0) Data frame received for 5\nI0619 13:16:50.251459 1025 log.go:172] (0xc000420000) (5) Data frame handling\nI0619 13:16:50.251808 1025 log.go:172] (0xc000a0e0b0) Data frame received for 1\nI0619 13:16:50.251837 1025 log.go:172] (0xc0003ce140) (1) Data frame handling\nI0619 13:16:50.251865 1025 log.go:172] (0xc000a0e0b0) Data frame received for 7\nI0619 13:16:50.251896 1025 log.go:172] (0xc0001fc5a0) (7) Data frame handling\nI0619 13:16:50.251919 1025 log.go:172] (0xc0003ce140) (1) Data frame sent\nI0619 13:16:50.252106 1025 log.go:172] (0xc000a0e0b0) (0xc0003ce140) Stream removed, broadcasting: 1\nI0619 13:16:50.252417 1025 log.go:172] (0xc000a0e0b0) (0xc0003ce140) Stream removed, broadcasting: 1\nI0619 13:16:50.252445 1025 log.go:172] (0xc000a0e0b0) (0xc0003ce1e0) Stream removed, broadcasting: 3\nI0619 13:16:50.252855 1025 log.go:172] (0xc000a0e0b0) (0xc000420000) Stream removed, broadcasting: 5\nI0619 13:16:50.252924 1025 log.go:172] (0xc000a0e0b0) Go away received\nI0619 13:16:50.253513 1025 log.go:172] (0xc000a0e0b0) (0xc0001fc5a0) Stream removed, broadcasting: 7\n" Jun 19 13:16:50.276: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:16:52.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6925" for this suite. 
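
The deprecated --generator=job/v1 path used above creates, roughly, a batch/v1 Job wrapping the given image and command: --restart=OnFailure becomes the pod template's restartPolicy, --stdin marks the container for attach, and --rm deletes the job once the attached session ends (visible as the job.batch "deleted" line in stdout). A hedged sketch of the equivalent object; exact generator defaults may differ.

package main

import (
	"encoding/json"
	"fmt"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	job := batchv1.Job{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-rm-busybox-job"},
		Spec: batchv1.JobSpec{
			Template: corev1.PodTemplateSpec{
				Spec: corev1.PodSpec{
					RestartPolicy: corev1.RestartPolicyOnFailure, // from --restart=OnFailure
					Containers: []corev1.Container{{
						Name:    "e2e-test-rm-busybox-job",
						Image:   "docker.io/library/busybox:1.29",
						Command: []string{"sh", "-c", "cat && echo 'stdin closed'"},
						// kubectl attaches, writes "abcd1234" to stdin, and closes it;
						// the container echoes it back and exits, completing the job.
						Stdin: true,
					}},
				},
			},
		},
	}
	b, _ := json.MarshalIndent(job, "", "  ")
	fmt.Println(string(b))
}
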
Jun 19 13:16:58.299: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:16:58.414: INFO: namespace kubectl-6925 deletion completed in 6.128102305s • [SLOW TEST:11.436 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:16:58.415: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Jun 19 13:16:58.529: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6539,SelfLink:/api/v1/namespaces/watch-6539/configmaps/e2e-watch-test-label-changed,UID:82e12aa3-7272-4fd2-8807-df77bf29cf2b,ResourceVersion:17316820,Generation:0,CreationTimestamp:2020-06-19 13:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 19 13:16:58.529: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6539,SelfLink:/api/v1/namespaces/watch-6539/configmaps/e2e-watch-test-label-changed,UID:82e12aa3-7272-4fd2-8807-df77bf29cf2b,ResourceVersion:17316821,Generation:0,CreationTimestamp:2020-06-19 13:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jun 19 13:16:58.529: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6539,SelfLink:/api/v1/namespaces/watch-6539/configmaps/e2e-watch-test-label-changed,UID:82e12aa3-7272-4fd2-8807-df77bf29cf2b,ResourceVersion:17316822,Generation:0,CreationTimestamp:2020-06-19 13:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Jun 19 13:17:08.586: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6539,SelfLink:/api/v1/namespaces/watch-6539/configmaps/e2e-watch-test-label-changed,UID:82e12aa3-7272-4fd2-8807-df77bf29cf2b,ResourceVersion:17316844,Generation:0,CreationTimestamp:2020-06-19 13:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 19 13:17:08.586: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6539,SelfLink:/api/v1/namespaces/watch-6539/configmaps/e2e-watch-test-label-changed,UID:82e12aa3-7272-4fd2-8807-df77bf29cf2b,ResourceVersion:17316845,Generation:0,CreationTimestamp:2020-06-19 13:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Jun 19 13:17:08.586: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-6539,SelfLink:/api/v1/namespaces/watch-6539/configmaps/e2e-watch-test-label-changed,UID:82e12aa3-7272-4fd2-8807-df77bf29cf2b,ResourceVersion:17316846,Generation:0,CreationTimestamp:2020-06-19 13:16:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:17:08.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6539" for this suite. 
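
The DELETED/ADDED pair in this log is a property of label-selector watches: when an object is edited so it no longer matches the selector, the watcher sees a DELETED event, and when the label is restored it sees ADDED, even though the configmap existed throughout. A minimal client-go sketch of such a selector-scoped watch (v1.15-era signature; the namespace is illustrative, the selector matches the log above).

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// Events are delivered relative to the selector, not the object's
	// lifecycle: leaving the selector reads as DELETED, re-entering as ADDED.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	for ev := range w.ResultChan() {
		fmt.Println(ev.Type)
	}
}
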
Jun 19 13:17:14.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:17:14.678: INFO: namespace watch-6539 deletion completed in 6.08746432s • [SLOW TEST:16.263 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:17:14.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 13:17:14.776: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0461d6ce-75bd-42b0-ae0a-e117a16ac5df" in namespace "downward-api-8453" to be "success or failure" Jun 19 13:17:14.782: INFO: Pod "downwardapi-volume-0461d6ce-75bd-42b0-ae0a-e117a16ac5df": Phase="Pending", Reason="", readiness=false. Elapsed: 5.635677ms Jun 19 13:17:16.785: INFO: Pod "downwardapi-volume-0461d6ce-75bd-42b0-ae0a-e117a16ac5df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009469635s Jun 19 13:17:18.790: INFO: Pod "downwardapi-volume-0461d6ce-75bd-42b0-ae0a-e117a16ac5df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013945118s STEP: Saw pod success Jun 19 13:17:18.790: INFO: Pod "downwardapi-volume-0461d6ce-75bd-42b0-ae0a-e117a16ac5df" satisfied condition "success or failure" Jun 19 13:17:18.793: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-0461d6ce-75bd-42b0-ae0a-e117a16ac5df container client-container: STEP: delete the pod Jun 19 13:17:18.877: INFO: Waiting for pod downwardapi-volume-0461d6ce-75bd-42b0-ae0a-e117a16ac5df to disappear Jun 19 13:17:18.883: INFO: Pod downwardapi-volume-0461d6ce-75bd-42b0-ae0a-e117a16ac5df no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:17:18.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8453" for this suite. 
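
The file this Downward API test reads is produced by a downwardAPI volume whose item points at the container's own limits.cpu, scaled by a divisor. A sketch of the volume item; the path, container name, and divisor are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// The kubelet writes the container's cpu limit into the named file;
	// a divisor of "1m" reports the value in millicores.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_limit",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "limits.cpu",
						Divisor:       resource.MustParse("1m"),
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}
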
Jun 19 13:17:24.911: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:17:25.002: INFO: namespace downward-api-8453 deletion completed in 6.114125032s • [SLOW TEST:10.323 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:17:25.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:17:25.075: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:17:29.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6157" for this suite. 
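
Log retrieval goes through the pod's /log subresource regardless of transport; this test exercises it over a websocket connection specifically, while an ordinary client-go call against the same subresource looks like the sketch below (v1.15-era Stream signature, newer releases take a context; the pod name and namespace are illustrative).

package main

import (
	"fmt"
	"io/ioutil"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// GetLogs builds a request against pods/<name>/log; the conformance test
	// performs the equivalent request over a websocket instead of plain HTTP.
	req := cs.CoreV1().Pods("default").GetLogs("pod-logs-websocket", &corev1.PodLogOptions{})
	body, err := req.Stream()
	if err != nil {
		panic(err)
	}
	defer body.Close()

	out, _ := ioutil.ReadAll(body)
	fmt.Print(string(out))
}
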
Jun 19 13:18:15.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:18:15.241: INFO: namespace pods-6157 deletion completed in 46.12222987s • [SLOW TEST:50.239 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:18:15.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:18:15.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-4189" for this suite. 
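
The pod under this Kubelet test just runs a command that always exits non-zero, so it crash-loops and never becomes Ready; the assertion is only that such a pod can still be deleted like any other. A sketch of the pod; the name and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// /bin/false exits 1 immediately, so the container restarts forever
	// under the default restartPolicy; deletion must still succeed.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false"}, // illustrative name
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
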
Jun 19 13:18:21.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:18:21.555: INFO: namespace kubelet-test-4189 deletion completed in 6.096933331s • [SLOW TEST:6.314 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:18:21.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 13:18:21.664: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f08e76da-6a14-4738-b841-958b647e6931" in namespace "downward-api-3369" to be "success or failure" Jun 19 13:18:21.681: INFO: Pod "downwardapi-volume-f08e76da-6a14-4738-b841-958b647e6931": Phase="Pending", Reason="", readiness=false. Elapsed: 17.327073ms Jun 19 13:18:23.696: INFO: Pod "downwardapi-volume-f08e76da-6a14-4738-b841-958b647e6931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032708822s Jun 19 13:18:25.727: INFO: Pod "downwardapi-volume-f08e76da-6a14-4738-b841-958b647e6931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063075611s STEP: Saw pod success Jun 19 13:18:25.727: INFO: Pod "downwardapi-volume-f08e76da-6a14-4738-b841-958b647e6931" satisfied condition "success or failure" Jun 19 13:18:25.730: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-f08e76da-6a14-4738-b841-958b647e6931 container client-container: STEP: delete the pod Jun 19 13:18:25.800: INFO: Waiting for pod downwardapi-volume-f08e76da-6a14-4738-b841-958b647e6931 to disappear Jun 19 13:18:25.805: INFO: Pod downwardapi-volume-f08e76da-6a14-4738-b841-958b647e6931 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:18:25.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3369" for this suite. 
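
This memory-request test uses the same downwardAPI volume mechanism as the cpu-limit test earlier; only the resource reference and divisor differ. A sketch of the volume item alone; the path, container name, and divisor are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// A divisor of "1Mi" reports the container's memory request in mebibytes.
	file := corev1.DownwardAPIVolumeFile{
		Path: "memory_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "requests.memory",
			Divisor:       resource.MustParse("1Mi"),
		},
	}
	b, _ := json.MarshalIndent(file, "", "  ")
	fmt.Println(string(b))
}
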
Jun 19 13:18:31.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:18:31.896: INFO: namespace downward-api-3369 deletion completed in 6.086666232s • [SLOW TEST:10.340 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:18:31.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-3d0d58b7-5fc9-4dd1-adb5-a9fbf67b8e68 STEP: Creating a pod to test consume secrets Jun 19 13:18:32.043: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-65ee6f68-cea1-4033-9908-da2697b81982" in namespace "projected-1675" to be "success or failure" Jun 19 13:18:32.080: INFO: Pod "pod-projected-secrets-65ee6f68-cea1-4033-9908-da2697b81982": Phase="Pending", Reason="", readiness=false. Elapsed: 36.268906ms Jun 19 13:18:34.218: INFO: Pod "pod-projected-secrets-65ee6f68-cea1-4033-9908-da2697b81982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.175090396s Jun 19 13:18:36.223: INFO: Pod "pod-projected-secrets-65ee6f68-cea1-4033-9908-da2697b81982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.179685726s STEP: Saw pod success Jun 19 13:18:36.223: INFO: Pod "pod-projected-secrets-65ee6f68-cea1-4033-9908-da2697b81982" satisfied condition "success or failure" Jun 19 13:18:36.226: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-65ee6f68-cea1-4033-9908-da2697b81982 container projected-secret-volume-test: STEP: delete the pod Jun 19 13:18:36.259: INFO: Waiting for pod pod-projected-secrets-65ee6f68-cea1-4033-9908-da2697b81982 to disappear Jun 19 13:18:36.272: INFO: Pod pod-projected-secrets-65ee6f68-cea1-4033-9908-da2697b81982 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:18:36.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1675" for this suite. 
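
Two knobs interact in this Projected secret test: the volume's defaultMode sets the permission bits on the projected files, and the pod-level fsGroup makes the kubelet set the files' group so a non-root user can still read them despite the tight mode. A sketch of such a pod; the uid, gid, mode, names, and image are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0440)                 // defaultMode: group-readable, not world-readable
	uid, gid := int64(1000), int64(1001) // non-root user plus the fsGroup GID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			// fsGroup applies the supplemental group to projected files,
			// letting the non-root user read them under mode 0440.
			SecurityContext: &corev1.PodSecurityContext{
				RunAsUser: &uid,
				FSGroup:   &gid,
			},
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
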
Jun 19 13:18:42.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:18:42.361: INFO: namespace projected-1675 deletion completed in 6.084123588s • [SLOW TEST:10.464 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:18:42.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Jun 19 13:18:46.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-106f6406-99dc-4007-852f-39ac3fdad0d7 -c busybox-main-container --namespace=emptydir-5931 -- cat /usr/share/volumeshare/shareddata.txt' Jun 19 13:18:49.278: INFO: stderr: "I0619 13:18:49.176448 1049 log.go:172] (0xc000bb44d0) (0xc0003e28c0) Create stream\nI0619 13:18:49.176491 1049 log.go:172] (0xc000bb44d0) (0xc0003e28c0) Stream added, broadcasting: 1\nI0619 13:18:49.179033 1049 log.go:172] (0xc000bb44d0) Reply frame received for 1\nI0619 13:18:49.179090 1049 log.go:172] (0xc000bb44d0) (0xc0009a2000) Create stream\nI0619 13:18:49.179112 1049 log.go:172] (0xc000bb44d0) (0xc0009a2000) Stream added, broadcasting: 3\nI0619 13:18:49.180427 1049 log.go:172] (0xc000bb44d0) Reply frame received for 3\nI0619 13:18:49.180481 1049 log.go:172] (0xc000bb44d0) (0xc000a30000) Create stream\nI0619 13:18:49.180500 1049 log.go:172] (0xc000bb44d0) (0xc000a30000) Stream added, broadcasting: 5\nI0619 13:18:49.182154 1049 log.go:172] (0xc000bb44d0) Reply frame received for 5\nI0619 13:18:49.263755 1049 log.go:172] (0xc000bb44d0) Data frame received for 3\nI0619 13:18:49.263796 1049 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0619 13:18:49.263808 1049 log.go:172] (0xc0009a2000) (3) Data frame sent\nI0619 13:18:49.263817 1049 log.go:172] (0xc000bb44d0) Data frame received for 3\nI0619 13:18:49.263825 1049 log.go:172] (0xc0009a2000) (3) Data frame handling\nI0619 13:18:49.263863 1049 log.go:172] (0xc000bb44d0) Data frame received for 5\nI0619 13:18:49.263873 1049 log.go:172] (0xc000a30000) (5) Data frame handling\nI0619 13:18:49.265471 1049 log.go:172] (0xc000bb44d0) Data frame received for 1\nI0619 13:18:49.265499 1049 log.go:172] (0xc0003e28c0) (1) Data frame handling\nI0619 13:18:49.265525 1049 log.go:172] (0xc0003e28c0) (1) Data frame sent\nI0619 13:18:49.265636 1049 log.go:172] (0xc000bb44d0) (0xc0003e28c0) Stream removed, broadcasting: 1\nI0619 13:18:49.265720 1049 log.go:172] (0xc000bb44d0) Go 
away received\nI0619 13:18:49.268121 1049 log.go:172] (0xc000bb44d0) (0xc0003e28c0) Stream removed, broadcasting: 1\nI0619 13:18:49.268152 1049 log.go:172] (0xc000bb44d0) (0xc0009a2000) Stream removed, broadcasting: 3\nI0619 13:18:49.268163 1049 log.go:172] (0xc000bb44d0) (0xc000a30000) Stream removed, broadcasting: 5\n" Jun 19 13:18:49.278: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:18:49.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5931" for this suite. Jun 19 13:18:55.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:18:55.402: INFO: namespace emptydir-5931 deletion completed in 6.121022863s • [SLOW TEST:13.041 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:18:55.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-8929/secret-test-033fd3d1-c1ef-450d-8d67-1bac6d0d41bf STEP: Creating a pod to test consume secrets Jun 19 13:18:55.456: INFO: Waiting up to 5m0s for pod "pod-configmaps-d62ab59d-efe5-4b50-9482-f56f69f74f71" in namespace "secrets-8929" to be "success or failure" Jun 19 13:18:55.466: INFO: Pod "pod-configmaps-d62ab59d-efe5-4b50-9482-f56f69f74f71": Phase="Pending", Reason="", readiness=false. Elapsed: 10.361908ms Jun 19 13:18:57.469: INFO: Pod "pod-configmaps-d62ab59d-efe5-4b50-9482-f56f69f74f71": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013317245s Jun 19 13:18:59.474: INFO: Pod "pod-configmaps-d62ab59d-efe5-4b50-9482-f56f69f74f71": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018218997s STEP: Saw pod success Jun 19 13:18:59.474: INFO: Pod "pod-configmaps-d62ab59d-efe5-4b50-9482-f56f69f74f71" satisfied condition "success or failure" Jun 19 13:18:59.476: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d62ab59d-efe5-4b50-9482-f56f69f74f71 container env-test: STEP: delete the pod Jun 19 13:18:59.513: INFO: Waiting for pod pod-configmaps-d62ab59d-efe5-4b50-9482-f56f69f74f71 to disappear Jun 19 13:18:59.520: INFO: Pod pod-configmaps-d62ab59d-efe5-4b50-9482-f56f69f74f71 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:18:59.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8929" for this suite. Jun 19 13:19:05.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:19:05.624: INFO: namespace secrets-8929 deletion completed in 6.100648836s • [SLOW TEST:10.221 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:19:05.624: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7549 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-7549 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7549 Jun 19 13:19:05.727: INFO: Found 0 stateful pods, waiting for 1 Jun 19 13:19:15.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jun 19 13:19:15.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 13:19:16.129: INFO: stderr: "I0619 13:19:15.953707 1084 log.go:172] (0xc00099e4d0) (0xc000724780) Create stream\nI0619 13:19:15.953741 1084 log.go:172] (0xc00099e4d0) (0xc000724780) Stream added, broadcasting: 1\nI0619 13:19:15.956485 1084 log.go:172] (0xc00099e4d0) Reply frame received for 1\nI0619 13:19:15.956536 1084 log.go:172] (0xc00099e4d0) 
(0xc000423ae0) Create stream\nI0619 13:19:15.956567 1084 log.go:172] (0xc00099e4d0) (0xc000423ae0) Stream added, broadcasting: 3\nI0619 13:19:15.957734 1084 log.go:172] (0xc00099e4d0) Reply frame received for 3\nI0619 13:19:15.957765 1084 log.go:172] (0xc00099e4d0) (0xc000724000) Create stream\nI0619 13:19:15.957779 1084 log.go:172] (0xc00099e4d0) (0xc000724000) Stream added, broadcasting: 5\nI0619 13:19:15.958443 1084 log.go:172] (0xc00099e4d0) Reply frame received for 5\nI0619 13:19:16.070591 1084 log.go:172] (0xc00099e4d0) Data frame received for 5\nI0619 13:19:16.070620 1084 log.go:172] (0xc000724000) (5) Data frame handling\nI0619 13:19:16.070635 1084 log.go:172] (0xc000724000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 13:19:16.119806 1084 log.go:172] (0xc00099e4d0) Data frame received for 3\nI0619 13:19:16.119905 1084 log.go:172] (0xc000423ae0) (3) Data frame handling\nI0619 13:19:16.120046 1084 log.go:172] (0xc000423ae0) (3) Data frame sent\nI0619 13:19:16.120250 1084 log.go:172] (0xc00099e4d0) Data frame received for 5\nI0619 13:19:16.120304 1084 log.go:172] (0xc000724000) (5) Data frame handling\nI0619 13:19:16.120337 1084 log.go:172] (0xc00099e4d0) Data frame received for 3\nI0619 13:19:16.120354 1084 log.go:172] (0xc000423ae0) (3) Data frame handling\nI0619 13:19:16.122773 1084 log.go:172] (0xc00099e4d0) Data frame received for 1\nI0619 13:19:16.122796 1084 log.go:172] (0xc000724780) (1) Data frame handling\nI0619 13:19:16.122814 1084 log.go:172] (0xc000724780) (1) Data frame sent\nI0619 13:19:16.123050 1084 log.go:172] (0xc00099e4d0) (0xc000724780) Stream removed, broadcasting: 1\nI0619 13:19:16.123415 1084 log.go:172] (0xc00099e4d0) Go away received\nI0619 13:19:16.123561 1084 log.go:172] (0xc00099e4d0) (0xc000724780) Stream removed, broadcasting: 1\nI0619 13:19:16.123584 1084 log.go:172] (0xc00099e4d0) (0xc000423ae0) Stream removed, broadcasting: 3\nI0619 13:19:16.123595 1084 log.go:172] (0xc00099e4d0) (0xc000724000) Stream removed, broadcasting: 5\n" Jun 19 13:19:16.129: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 13:19:16.129: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 13:19:16.133: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 19 13:19:26.137: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 19 13:19:26.137: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 13:19:26.149: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:19:26.149: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:19:26.149: INFO: Jun 19 13:19:26.149: INFO: StatefulSet ss has not reached scale 3, at 1 Jun 19 13:19:27.155: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994470464s Jun 19 13:19:28.160: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988817642s Jun 19 13:19:29.231: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983469537s Jun 19 13:19:30.236: 
INFO: Verifying statefulset ss doesn't scale past 3 for another 5.912692066s Jun 19 13:19:31.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.907693082s Jun 19 13:19:32.247: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.902396491s Jun 19 13:19:33.252: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.897118675s Jun 19 13:19:34.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.891973454s Jun 19 13:19:35.269: INFO: Verifying statefulset ss doesn't scale past 3 for another 879.731746ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7549 Jun 19 13:19:36.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:19:36.502: INFO: stderr: "I0619 13:19:36.405884 1107 log.go:172] (0xc00013adc0) (0xc00077a8c0) Create stream\nI0619 13:19:36.405939 1107 log.go:172] (0xc00013adc0) (0xc00077a8c0) Stream added, broadcasting: 1\nI0619 13:19:36.408174 1107 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0619 13:19:36.408211 1107 log.go:172] (0xc00013adc0) (0xc000828000) Create stream\nI0619 13:19:36.408228 1107 log.go:172] (0xc00013adc0) (0xc000828000) Stream added, broadcasting: 3\nI0619 13:19:36.408984 1107 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0619 13:19:36.409013 1107 log.go:172] (0xc00013adc0) (0xc0008f4000) Create stream\nI0619 13:19:36.409023 1107 log.go:172] (0xc00013adc0) (0xc0008f4000) Stream added, broadcasting: 5\nI0619 13:19:36.410077 1107 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0619 13:19:36.494508 1107 log.go:172] (0xc00013adc0) Data frame received for 3\nI0619 13:19:36.494542 1107 log.go:172] (0xc000828000) (3) Data frame handling\nI0619 13:19:36.494583 1107 log.go:172] (0xc00013adc0) Data frame received for 5\nI0619 13:19:36.494619 1107 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0619 13:19:36.494635 1107 log.go:172] (0xc0008f4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0619 13:19:36.494650 1107 log.go:172] (0xc000828000) (3) Data frame sent\nI0619 13:19:36.494675 1107 log.go:172] (0xc00013adc0) Data frame received for 3\nI0619 13:19:36.494683 1107 log.go:172] (0xc000828000) (3) Data frame handling\nI0619 13:19:36.494700 1107 log.go:172] (0xc00013adc0) Data frame received for 5\nI0619 13:19:36.494707 1107 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0619 13:19:36.495862 1107 log.go:172] (0xc00013adc0) Data frame received for 1\nI0619 13:19:36.495890 1107 log.go:172] (0xc00077a8c0) (1) Data frame handling\nI0619 13:19:36.495903 1107 log.go:172] (0xc00077a8c0) (1) Data frame sent\nI0619 13:19:36.495914 1107 log.go:172] (0xc00013adc0) (0xc00077a8c0) Stream removed, broadcasting: 1\nI0619 13:19:36.495931 1107 log.go:172] (0xc00013adc0) Go away received\nI0619 13:19:36.496525 1107 log.go:172] (0xc00013adc0) (0xc00077a8c0) Stream removed, broadcasting: 1\nI0619 13:19:36.496553 1107 log.go:172] (0xc00013adc0) (0xc000828000) Stream removed, broadcasting: 3\nI0619 13:19:36.496565 1107 log.go:172] (0xc00013adc0) (0xc0008f4000) Stream removed, broadcasting: 5\n" Jun 19 13:19:36.502: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 13:19:36.502: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 13:19:36.502: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:19:36.747: INFO: stderr: "I0619 13:19:36.634799 1128 log.go:172] (0xc0006f2630) (0xc000622aa0) Create stream\nI0619 13:19:36.634892 1128 log.go:172] (0xc0006f2630) (0xc000622aa0) Stream added, broadcasting: 1\nI0619 13:19:36.641038 1128 log.go:172] (0xc0006f2630) Reply frame received for 1\nI0619 13:19:36.641323 1128 log.go:172] (0xc0006f2630) (0xc000478000) Create stream\nI0619 13:19:36.641354 1128 log.go:172] (0xc0006f2630) (0xc000478000) Stream added, broadcasting: 3\nI0619 13:19:36.642385 1128 log.go:172] (0xc0006f2630) Reply frame received for 3\nI0619 13:19:36.642417 1128 log.go:172] (0xc0006f2630) (0xc000622320) Create stream\nI0619 13:19:36.642426 1128 log.go:172] (0xc0006f2630) (0xc000622320) Stream added, broadcasting: 5\nI0619 13:19:36.643363 1128 log.go:172] (0xc0006f2630) Reply frame received for 5\nI0619 13:19:36.731014 1128 log.go:172] (0xc0006f2630) Data frame received for 5\nI0619 13:19:36.731043 1128 log.go:172] (0xc000622320) (5) Data frame handling\nI0619 13:19:36.731056 1128 log.go:172] (0xc000622320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0619 13:19:36.739281 1128 log.go:172] (0xc0006f2630) Data frame received for 3\nI0619 13:19:36.739313 1128 log.go:172] (0xc000478000) (3) Data frame handling\nI0619 13:19:36.739330 1128 log.go:172] (0xc000478000) (3) Data frame sent\nI0619 13:19:36.739357 1128 log.go:172] (0xc0006f2630) Data frame received for 3\nI0619 13:19:36.739373 1128 log.go:172] (0xc000478000) (3) Data frame handling\nI0619 13:19:36.739395 1128 log.go:172] (0xc0006f2630) Data frame received for 5\nI0619 13:19:36.739405 1128 log.go:172] (0xc000622320) (5) Data frame handling\nI0619 13:19:36.739415 1128 log.go:172] (0xc000622320) (5) Data frame sent\nI0619 13:19:36.739427 1128 log.go:172] (0xc0006f2630) Data frame received for 5\nI0619 13:19:36.739447 1128 log.go:172] (0xc000622320) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0619 13:19:36.739471 1128 log.go:172] (0xc000622320) (5) Data frame sent\nI0619 13:19:36.739491 1128 log.go:172] (0xc0006f2630) Data frame received for 5\nI0619 13:19:36.739502 1128 log.go:172] (0xc000622320) (5) Data frame handling\nI0619 13:19:36.741677 1128 log.go:172] (0xc0006f2630) Data frame received for 1\nI0619 13:19:36.741698 1128 log.go:172] (0xc000622aa0) (1) Data frame handling\nI0619 13:19:36.741711 1128 log.go:172] (0xc000622aa0) (1) Data frame sent\nI0619 13:19:36.741724 1128 log.go:172] (0xc0006f2630) (0xc000622aa0) Stream removed, broadcasting: 1\nI0619 13:19:36.741737 1128 log.go:172] (0xc0006f2630) Go away received\nI0619 13:19:36.742148 1128 log.go:172] (0xc0006f2630) (0xc000622aa0) Stream removed, broadcasting: 1\nI0619 13:19:36.742173 1128 log.go:172] (0xc0006f2630) (0xc000478000) Stream removed, broadcasting: 3\nI0619 13:19:36.742182 1128 log.go:172] (0xc0006f2630) (0xc000622320) Stream removed, broadcasting: 5\n" Jun 19 13:19:36.747: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 13:19:36.747: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 13:19:36.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || 
true' Jun 19 13:19:36.932: INFO: stderr: "I0619 13:19:36.861481 1149 log.go:172] (0xc000116dc0) (0xc0002ce820) Create stream\nI0619 13:19:36.861544 1149 log.go:172] (0xc000116dc0) (0xc0002ce820) Stream added, broadcasting: 1\nI0619 13:19:36.863903 1149 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0619 13:19:36.863945 1149 log.go:172] (0xc000116dc0) (0xc000a14000) Create stream\nI0619 13:19:36.863960 1149 log.go:172] (0xc000116dc0) (0xc000a14000) Stream added, broadcasting: 3\nI0619 13:19:36.864599 1149 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0619 13:19:36.864619 1149 log.go:172] (0xc000116dc0) (0xc0002ce8c0) Create stream\nI0619 13:19:36.864625 1149 log.go:172] (0xc000116dc0) (0xc0002ce8c0) Stream added, broadcasting: 5\nI0619 13:19:36.865766 1149 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0619 13:19:36.924044 1149 log.go:172] (0xc000116dc0) Data frame received for 3\nI0619 13:19:36.924152 1149 log.go:172] (0xc000a14000) (3) Data frame handling\nI0619 13:19:36.924168 1149 log.go:172] (0xc000a14000) (3) Data frame sent\nI0619 13:19:36.924176 1149 log.go:172] (0xc000116dc0) Data frame received for 3\nI0619 13:19:36.924182 1149 log.go:172] (0xc000a14000) (3) Data frame handling\nI0619 13:19:36.924207 1149 log.go:172] (0xc000116dc0) Data frame received for 5\nI0619 13:19:36.924213 1149 log.go:172] (0xc0002ce8c0) (5) Data frame handling\nI0619 13:19:36.924227 1149 log.go:172] (0xc0002ce8c0) (5) Data frame sent\nI0619 13:19:36.924239 1149 log.go:172] (0xc000116dc0) Data frame received for 5\nI0619 13:19:36.924270 1149 log.go:172] (0xc0002ce8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0619 13:19:36.926298 1149 log.go:172] (0xc000116dc0) Data frame received for 1\nI0619 13:19:36.926331 1149 log.go:172] (0xc0002ce820) (1) Data frame handling\nI0619 13:19:36.926343 1149 log.go:172] (0xc0002ce820) (1) Data frame sent\nI0619 13:19:36.926356 1149 log.go:172] (0xc000116dc0) (0xc0002ce820) Stream removed, broadcasting: 1\nI0619 13:19:36.926368 1149 log.go:172] (0xc000116dc0) Go away received\nI0619 13:19:36.926835 1149 log.go:172] (0xc000116dc0) (0xc0002ce820) Stream removed, broadcasting: 1\nI0619 13:19:36.926859 1149 log.go:172] (0xc000116dc0) (0xc000a14000) Stream removed, broadcasting: 3\nI0619 13:19:36.926869 1149 log.go:172] (0xc000116dc0) (0xc0002ce8c0) Stream removed, broadcasting: 5\n" Jun 19 13:19:36.932: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 13:19:36.932: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 13:19:36.950: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jun 19 13:19:46.954: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 19 13:19:46.954: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 19 13:19:46.954: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jun 19 13:19:46.958: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 13:19:47.187: INFO: stderr: "I0619 13:19:47.086723 1170 log.go:172] (0xc000a14420) (0xc000666aa0) Create stream\nI0619 
13:19:47.086781 1170 log.go:172] (0xc000a14420) (0xc000666aa0) Stream added, broadcasting: 1\nI0619 13:19:47.091480 1170 log.go:172] (0xc000a14420) Reply frame received for 1\nI0619 13:19:47.091522 1170 log.go:172] (0xc000a14420) (0xc000666280) Create stream\nI0619 13:19:47.091531 1170 log.go:172] (0xc000a14420) (0xc000666280) Stream added, broadcasting: 3\nI0619 13:19:47.092435 1170 log.go:172] (0xc000a14420) Reply frame received for 3\nI0619 13:19:47.092480 1170 log.go:172] (0xc000a14420) (0xc00002e000) Create stream\nI0619 13:19:47.092507 1170 log.go:172] (0xc000a14420) (0xc00002e000) Stream added, broadcasting: 5\nI0619 13:19:47.093612 1170 log.go:172] (0xc000a14420) Reply frame received for 5\nI0619 13:19:47.178060 1170 log.go:172] (0xc000a14420) Data frame received for 3\nI0619 13:19:47.178092 1170 log.go:172] (0xc000666280) (3) Data frame handling\nI0619 13:19:47.178106 1170 log.go:172] (0xc000666280) (3) Data frame sent\nI0619 13:19:47.178166 1170 log.go:172] (0xc000a14420) Data frame received for 5\nI0619 13:19:47.178219 1170 log.go:172] (0xc00002e000) (5) Data frame handling\nI0619 13:19:47.178250 1170 log.go:172] (0xc00002e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 13:19:47.178274 1170 log.go:172] (0xc000a14420) Data frame received for 5\nI0619 13:19:47.178317 1170 log.go:172] (0xc00002e000) (5) Data frame handling\nI0619 13:19:47.178484 1170 log.go:172] (0xc000a14420) Data frame received for 3\nI0619 13:19:47.178503 1170 log.go:172] (0xc000666280) (3) Data frame handling\nI0619 13:19:47.180188 1170 log.go:172] (0xc000a14420) Data frame received for 1\nI0619 13:19:47.180207 1170 log.go:172] (0xc000666aa0) (1) Data frame handling\nI0619 13:19:47.180220 1170 log.go:172] (0xc000666aa0) (1) Data frame sent\nI0619 13:19:47.180240 1170 log.go:172] (0xc000a14420) (0xc000666aa0) Stream removed, broadcasting: 1\nI0619 13:19:47.180631 1170 log.go:172] (0xc000a14420) Go away received\nI0619 13:19:47.180784 1170 log.go:172] (0xc000a14420) (0xc000666aa0) Stream removed, broadcasting: 1\nI0619 13:19:47.180821 1170 log.go:172] (0xc000a14420) (0xc000666280) Stream removed, broadcasting: 3\nI0619 13:19:47.180843 1170 log.go:172] (0xc000a14420) (0xc00002e000) Stream removed, broadcasting: 5\n" Jun 19 13:19:47.187: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 13:19:47.187: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 13:19:47.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 13:19:47.601: INFO: stderr: "I0619 13:19:47.315147 1191 log.go:172] (0xc000a224d0) (0xc000668a00) Create stream\nI0619 13:19:47.315228 1191 log.go:172] (0xc000a224d0) (0xc000668a00) Stream added, broadcasting: 1\nI0619 13:19:47.318815 1191 log.go:172] (0xc000a224d0) Reply frame received for 1\nI0619 13:19:47.318877 1191 log.go:172] (0xc000a224d0) (0xc0006681e0) Create stream\nI0619 13:19:47.318912 1191 log.go:172] (0xc000a224d0) (0xc0006681e0) Stream added, broadcasting: 3\nI0619 13:19:47.319956 1191 log.go:172] (0xc000a224d0) Reply frame received for 3\nI0619 13:19:47.320038 1191 log.go:172] (0xc000a224d0) (0xc000186000) Create stream\nI0619 13:19:47.320061 1191 log.go:172] (0xc000a224d0) (0xc000186000) Stream added, broadcasting: 5\nI0619 13:19:47.320921 1191 log.go:172] (0xc000a224d0) Reply frame received for 5\nI0619 
13:19:47.554754 1191 log.go:172] (0xc000a224d0) Data frame received for 5\nI0619 13:19:47.554775 1191 log.go:172] (0xc000186000) (5) Data frame handling\nI0619 13:19:47.554792 1191 log.go:172] (0xc000186000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 13:19:47.594179 1191 log.go:172] (0xc000a224d0) Data frame received for 3\nI0619 13:19:47.594220 1191 log.go:172] (0xc0006681e0) (3) Data frame handling\nI0619 13:19:47.594244 1191 log.go:172] (0xc0006681e0) (3) Data frame sent\nI0619 13:19:47.594257 1191 log.go:172] (0xc000a224d0) Data frame received for 3\nI0619 13:19:47.594269 1191 log.go:172] (0xc0006681e0) (3) Data frame handling\nI0619 13:19:47.594554 1191 log.go:172] (0xc000a224d0) Data frame received for 5\nI0619 13:19:47.594591 1191 log.go:172] (0xc000186000) (5) Data frame handling\nI0619 13:19:47.595788 1191 log.go:172] (0xc000a224d0) Data frame received for 1\nI0619 13:19:47.595805 1191 log.go:172] (0xc000668a00) (1) Data frame handling\nI0619 13:19:47.595822 1191 log.go:172] (0xc000668a00) (1) Data frame sent\nI0619 13:19:47.595871 1191 log.go:172] (0xc000a224d0) (0xc000668a00) Stream removed, broadcasting: 1\nI0619 13:19:47.595957 1191 log.go:172] (0xc000a224d0) Go away received\nI0619 13:19:47.596178 1191 log.go:172] (0xc000a224d0) (0xc000668a00) Stream removed, broadcasting: 1\nI0619 13:19:47.596191 1191 log.go:172] (0xc000a224d0) (0xc0006681e0) Stream removed, broadcasting: 3\nI0619 13:19:47.596199 1191 log.go:172] (0xc000a224d0) (0xc000186000) Stream removed, broadcasting: 5\n" Jun 19 13:19:47.602: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 13:19:47.602: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 13:19:47.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 13:19:47.842: INFO: stderr: "I0619 13:19:47.731442 1213 log.go:172] (0xc000830420) (0xc00038e6e0) Create stream\nI0619 13:19:47.731495 1213 log.go:172] (0xc000830420) (0xc00038e6e0) Stream added, broadcasting: 1\nI0619 13:19:47.733898 1213 log.go:172] (0xc000830420) Reply frame received for 1\nI0619 13:19:47.733954 1213 log.go:172] (0xc000830420) (0xc00095e000) Create stream\nI0619 13:19:47.733970 1213 log.go:172] (0xc000830420) (0xc00095e000) Stream added, broadcasting: 3\nI0619 13:19:47.734976 1213 log.go:172] (0xc000830420) Reply frame received for 3\nI0619 13:19:47.735015 1213 log.go:172] (0xc000830420) (0xc00038e780) Create stream\nI0619 13:19:47.735030 1213 log.go:172] (0xc000830420) (0xc00038e780) Stream added, broadcasting: 5\nI0619 13:19:47.735867 1213 log.go:172] (0xc000830420) Reply frame received for 5\nI0619 13:19:47.796597 1213 log.go:172] (0xc000830420) Data frame received for 5\nI0619 13:19:47.796628 1213 log.go:172] (0xc00038e780) (5) Data frame handling\nI0619 13:19:47.796648 1213 log.go:172] (0xc00038e780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 13:19:47.832479 1213 log.go:172] (0xc000830420) Data frame received for 3\nI0619 13:19:47.832508 1213 log.go:172] (0xc00095e000) (3) Data frame handling\nI0619 13:19:47.832520 1213 log.go:172] (0xc00095e000) (3) Data frame sent\nI0619 13:19:47.832533 1213 log.go:172] (0xc000830420) Data frame received for 3\nI0619 13:19:47.832539 1213 log.go:172] (0xc00095e000) (3) Data frame handling\nI0619 13:19:47.832561 1213 log.go:172] 
(0xc000830420) Data frame received for 5\nI0619 13:19:47.832598 1213 log.go:172] (0xc00038e780) (5) Data frame handling\nI0619 13:19:47.834462 1213 log.go:172] (0xc000830420) Data frame received for 1\nI0619 13:19:47.834491 1213 log.go:172] (0xc00038e6e0) (1) Data frame handling\nI0619 13:19:47.834519 1213 log.go:172] (0xc00038e6e0) (1) Data frame sent\nI0619 13:19:47.834545 1213 log.go:172] (0xc000830420) (0xc00038e6e0) Stream removed, broadcasting: 1\nI0619 13:19:47.834571 1213 log.go:172] (0xc000830420) Go away received\nI0619 13:19:47.835785 1213 log.go:172] (0xc000830420) (0xc00038e6e0) Stream removed, broadcasting: 1\nI0619 13:19:47.835820 1213 log.go:172] (0xc000830420) (0xc00095e000) Stream removed, broadcasting: 3\nI0619 13:19:47.835834 1213 log.go:172] (0xc000830420) (0xc00038e780) Stream removed, broadcasting: 5\n" Jun 19 13:19:47.842: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 13:19:47.842: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 13:19:47.842: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 13:19:47.846: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 19 13:19:57.855: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 19 13:19:57.855: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 19 13:19:57.855: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 19 13:19:57.867: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:19:57.867: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:19:57.867: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:19:57.867: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:19:57.867: INFO: Jun 19 13:19:57.867: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:19:58.981: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:19:58.981: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:19:58.981: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:19:58.981: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:19:58.981: INFO: Jun 19 13:19:58.981: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:19:59.986: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:19:59.986: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:19:59.986: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:19:59.986: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:19:59.986: INFO: Jun 19 13:19:59.986: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:20:00.991: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:20:00.991: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 
13:20:00.991: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:00.991: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:00.991: INFO: Jun 19 13:20:00.991: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:20:01.996: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:20:01.996: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:20:01.996: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:01.996: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:01.996: INFO: Jun 19 13:20:01.996: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:20:03.001: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:20:03.001: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:20:03.002: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:03.002: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:03.002: INFO: Jun 19 13:20:03.002: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:20:04.007: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:20:04.007: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:20:04.007: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:04.007: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:04.007: INFO: Jun 19 13:20:04.007: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:20:05.012: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:20:05.012: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:20:05.012: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:05.012: INFO: 
ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:05.012: INFO: Jun 19 13:20:05.012: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:20:06.018: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:20:06.018: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:20:06.018: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:06.018: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:06.018: INFO: Jun 19 13:20:06.018: INFO: StatefulSet ss has not reached scale 0, at 3 Jun 19 13:20:07.031: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:20:07.031: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:05 +0000 UTC }] Jun 19 13:20:07.031: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:07.031: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:48 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:19:26 +0000 UTC }] Jun 19 13:20:07.031: INFO: Jun 19 13:20:07.031: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-7549 Jun 19 13:20:08.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:20:08.176: INFO: rc: 1 Jun 19 13:20:08.176: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc002f990b0 exit status 1 true [0xc000010178 0xc000010330 0xc0000104c0] [0xc000010178 0xc000010330 0xc0000104c0] [0xc000010270 0xc000010488] [0xba70e0 0xba70e0] 0xc002c46840 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Jun 19 13:20:18.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:20:18.269: INFO: rc: 1 Jun 19 13:20:18.269: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00104a0c0 exit status 1 true [0xc000762b30 0xc000762ce0 0xc000762f48] [0xc000762b30 0xc000762ce0 0xc000762f48] [0xc000762cb0 0xc000762e90] [0xba70e0 0xba70e0] 0xc0022d8540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:20:28.269: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:20:28.365: INFO: rc: 1 Jun 19 13:20:28.365: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00104a180 exit status 1 true [0xc000762f58 0xc000763030 0xc000763188] [0xc000762f58 0xc000763030 0xc000763188] [0xc000762fd0 0xc0007630e0] [0xba70e0 0xba70e0] 0xc0022d88a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:20:38.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:20:38.468: INFO: rc: 1 Jun 19 13:20:38.468: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00104a240 exit status 1 true [0xc000763238 0xc000763310 0xc000763358]
[0xc000763238 0xc000763310 0xc000763358] [0xc000763300 0xc000763338] [0xba70e0 0xba70e0] 0xc0022d8f00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:20:48.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:20:48.584: INFO: rc: 1 Jun 19 13:20:48.584: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f991a0 exit status 1 true [0xc0000105a0 0xc000010798 0xc000010890] [0xc0000105a0 0xc000010798 0xc000010890] [0xc000010738 0xc000010828] [0xba70e0 0xba70e0] 0xc002c46b40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:20:58.585: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:20:58.694: INFO: rc: 1 Jun 19 13:20:58.695: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0034ee0f0 exit status 1 true [0xc00077e120 0xc00077e4a8 0xc00077e6f0] [0xc00077e120 0xc00077e4a8 0xc00077e6f0] [0xc00077e340 0xc00077e690] [0xba70e0 0xba70e0] 0xc002398900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:21:08.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:21:08.799: INFO: rc: 1 Jun 19 13:21:08.800: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00283e0f0 exit status 1 true [0xc0005d2018 0xc0000ea110 0xc0000ea7f8] [0xc0005d2018 0xc0000ea110 0xc0000ea7f8] [0xc0003780e8 0xc0000ea7b0] [0xba70e0 0xba70e0] 0xc001a22660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:21:18.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:21:18.910: INFO: rc: 1 Jun 19 13:21:18.910: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00283e1b0 exit status 1 true [0xc0000ea820 0xc0000ea980 0xc0000eab00] [0xc0000ea820 0xc0000ea980 0xc0000eab00] [0xc0000ea950 0xc0000eaa80] [0xba70e0 0xba70e0] 0xc001a22960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:21:28.911: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:21:29.017: INFO: rc: 1 Jun 19 13:21:29.017: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00283e270 exit status 1 true [0xc0000eab40 0xc0000eada8 0xc0000eae60] [0xc0000eab40 0xc0000eada8 0xc0000eae60] [0xc0000ead58 0xc0000eadf8] [0xba70e0 0xba70e0] 0xc001a23aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:21:39.018: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:21:39.129: INFO: rc: 1 Jun 19 13:21:39.129: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f992c0 exit status 1 true [0xc0000108d0 0xc000010940 0xc0000109f0] [0xc0000108d0 0xc000010940 0xc0000109f0] [0xc000010910 0xc0000109c0] [0xba70e0 0xba70e0] 0xc002c46e40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:21:49.129: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:21:49.225: INFO: rc: 1 Jun 19 13:21:49.225: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f993b0 exit status 1 true [0xc000010a70 0xc000010ae8 0xc000010ba0] [0xc000010a70 0xc000010ae8 0xc000010ba0] [0xc000010ac8 0xc000010b08] [0xba70e0 0xba70e0] 0xc002c47140 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:21:59.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:21:59.327: INFO: rc: 1 Jun 19 13:21:59.327: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00283e330 exit status 1 true [0xc0000eae90 0xc0000eb058 0xc0000eb2e0] [0xc0000eae90 0xc0000eb058 0xc0000eb2e0] [0xc0000eaef8 0xc0000eb168] [0xba70e0 0xba70e0] 0xc001a23da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:22:09.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:22:09.417: INFO: rc: 1 Jun 19 13:22:09.418: INFO: Waiting 10s to retry failed 
RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00104a090 exit status 1 true [0xc0003780e8 0xc000762bd8 0xc000762d60] [0xc0003780e8 0xc000762bd8 0xc000762d60] [0xc000762b30 0xc000762ce0] [0xba70e0 0xba70e0] 0xc0022d84e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:22:19.418: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:22:19.513: INFO: rc: 1 Jun 19 13:22:19.513: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0034ee090 exit status 1 true [0xc0000ea110 0xc0000ea7f8 0xc0000ea950] [0xc0000ea110 0xc0000ea7f8 0xc0000ea950] [0xc0000ea7b0 0xc0000ea908] [0xba70e0 0xba70e0] 0xc001a22660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:22:29.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:22:29.617: INFO: rc: 1 Jun 19 13:22:29.617: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0034ee180 exit status 1 true [0xc0000ea980 0xc0000eab00 0xc0000ead58] [0xc0000ea980 0xc0000eab00 0xc0000ead58] [0xc0000eaa80 0xc0000eac18] [0xba70e0 0xba70e0] 0xc001a22960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:22:39.617: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:22:39.708: INFO: rc: 1 Jun 19 13:22:39.708: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00283e0c0 exit status 1 true [0xc00077e120 0xc00077e4a8 0xc00077e6f0] [0xc00077e120 0xc00077e4a8 0xc00077e6f0] [0xc00077e340 0xc00077e690] [0xba70e0 0xba70e0] 0xc002398900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:22:49.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:22:49.807: INFO: rc: 1 Jun 19 13:22:49.807: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found 
[] 0xc0034ee240 exit status 1 true [0xc0000eada8 0xc0000eae60 0xc0000eaef8] [0xc0000eada8 0xc0000eae60 0xc0000eaef8] [0xc0000eadf8 0xc0000eaec0] [0xba70e0 0xba70e0] 0xc001a23aa0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:22:59.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:22:59.904: INFO: rc: 1 Jun 19 13:22:59.904: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f980c0 exit status 1 true [0xc000010010 0xc0000100e0 0xc000010198] [0xc000010010 0xc0000100e0 0xc000010198] [0xc000010070 0xc000010178] [0xba70e0 0xba70e0] 0xc002c46600 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:23:09.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:23:10.000: INFO: rc: 1 Jun 19 13:23:10.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0034ee330 exit status 1 true [0xc0000eb058 0xc0000eb2e0 0xc0000eb450] [0xc0000eb058 0xc0000eb2e0 0xc0000eb450] [0xc0000eb168 0xc0000eb378] [0xba70e0 0xba70e0] 0xc001a23da0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:23:20.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:23:20.102: INFO: rc: 1 Jun 19 13:23:20.102: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00104a270 exit status 1 true [0xc000762e90 0xc000762f88 0xc000763098] [0xc000762e90 0xc000762f88 0xc000763098] [0xc000762f58 0xc000763030] [0xba70e0 0xba70e0] 0xc0022d8840 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:23:30.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:23:30.210: INFO: rc: 1 Jun 19 13:23:30.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0034ee3f0 exit status 1 true [0xc0000eb4a8 0xc0000eb5e0 0xc0000eb7f8] [0xc0000eb4a8 0xc0000eb5e0 0xc0000eb7f8] [0xc0000eb550 0xc0000eb790] [0xba70e0 0xba70e0] 0xc002c96120 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found 
error: exit status 1 Jun 19 13:23:40.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:23:40.313: INFO: rc: 1 Jun 19 13:23:40.314: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00283e240 exit status 1 true [0xc00077e748 0xc00077ebe8 0xc00077eee0] [0xc00077e748 0xc00077ebe8 0xc00077eee0] [0xc00077ea38 0xc00077ed68] [0xba70e0 0xba70e0] 0xc0023991a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:23:50.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:23:50.420: INFO: rc: 1 Jun 19 13:23:50.420: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00283e360 exit status 1 true [0xc00077f008 0xc00077f440 0xc00077f5b8] [0xc00077f008 0xc00077f440 0xc00077f5b8] [0xc00077f258 0xc00077f4e0] [0xba70e0 0xba70e0] 0xc002fa4060 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:24:00.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:24:00.526: INFO: rc: 1 Jun 19 13:24:00.526: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0034ee4b0 exit status 1 true [0xc0000eb820 0xc0000eb870 0xc0000eb970] [0xc0000eb820 0xc0000eb870 0xc0000eb970] [0xc0000eb860 0xc0000eb940] [0xba70e0 0xba70e0] 0xc002c96420 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:24:10.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:24:10.628: INFO: rc: 1 Jun 19 13:24:10.628: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0034ee600 exit status 1 true [0xc0000eb9c0 0xc0000ebb00 0xc0000ebc50] [0xc0000eb9c0 0xc0000ebb00 0xc0000ebc50] [0xc0000eba90 0xc0000ebbe0] [0xba70e0 0xba70e0] 0xc002c96720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:24:20.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:24:20.727: INFO: rc: 1 Jun 
19 13:24:20.727: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0034ee0c0 exit status 1 true [0xc0005d2018 0xc00077e120 0xc00077e4a8] [0xc0005d2018 0xc00077e120 0xc00077e4a8] [0xc0003780e8 0xc00077e340] [0xba70e0 0xba70e0] 0xc002398900 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:24:30.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:24:30.829: INFO: rc: 1 Jun 19 13:24:30.829: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00104a0c0 exit status 1 true [0xc0000ea110 0xc0000ea7f8 0xc0000ea950] [0xc0000ea110 0xc0000ea7f8 0xc0000ea950] [0xc0000ea7b0 0xc0000ea908] [0xba70e0 0xba70e0] 0xc001a22660 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:24:40.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:24:40.932: INFO: rc: 1 Jun 19 13:24:40.932: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00283e150 exit status 1 true [0xc000762b30 0xc000762ce0 0xc000762f48] [0xc000762b30 0xc000762ce0 0xc000762f48] [0xc000762cb0 0xc000762e90] [0xba70e0 0xba70e0] 0xc002fa4240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:24:50.932: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:24:51.043: INFO: rc: 1 Jun 19 13:24:51.043: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002f98090 exit status 1 true [0xc000010010 0xc0000100e0 0xc000010198] [0xc000010010 0xc0000100e0 0xc000010198] [0xc000010070 0xc000010178] [0xba70e0 0xba70e0] 0xc002c962a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:25:01.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:25:01.155: INFO: rc: 1 Jun 19 13:25:01.155: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] 
Error from server (NotFound): pods "ss-0" not found [] 0xc002f98180 exit status 1 true [0xc000010270 0xc000010488 0xc000010658] [0xc000010270 0xc000010488 0xc000010658] [0xc000010400 0xc0000105a0] [0xba70e0 0xba70e0] 0xc002c968a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jun 19 13:25:11.156: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7549 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 13:25:11.254: INFO: rc: 1 Jun 19 13:25:11.255: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Jun 19 13:25:11.255: INFO: Scaling statefulset ss to 0 Jun 19 13:25:11.261: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 19 13:25:11.262: INFO: Deleting all statefulset in ns statefulset-7549 Jun 19 13:25:11.264: INFO: Scaling statefulset ss to 0 Jun 19 13:25:11.269: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 13:25:11.270: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:25:11.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7549" for this suite. Jun 19 13:25:17.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:25:17.455: INFO: namespace statefulset-7549 deletion completed in 6.160000344s • [SLOW TEST:371.831 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:25:17.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Jun 19 13:25:17.537: INFO: Waiting up to 5m0s for pod "pod-b9f69c86-3007-4c95-a567-55733bce55e5" in namespace "emptydir-5057" to be "success or failure" Jun 19 13:25:17.566: INFO: Pod "pod-b9f69c86-3007-4c95-a567-55733bce55e5": Phase="Pending", Reason="", readiness=false. Elapsed: 28.786558ms Jun 19 13:25:19.570: INFO: Pod "pod-b9f69c86-3007-4c95-a567-55733bce55e5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.032697323s Jun 19 13:25:21.631: INFO: Pod "pod-b9f69c86-3007-4c95-a567-55733bce55e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093602161s Jun 19 13:25:23.635: INFO: Pod "pod-b9f69c86-3007-4c95-a567-55733bce55e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09747028s Jun 19 13:25:25.639: INFO: Pod "pod-b9f69c86-3007-4c95-a567-55733bce55e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101345553s STEP: Saw pod success Jun 19 13:25:25.639: INFO: Pod "pod-b9f69c86-3007-4c95-a567-55733bce55e5" satisfied condition "success or failure" Jun 19 13:25:25.642: INFO: Trying to get logs from node iruya-worker pod pod-b9f69c86-3007-4c95-a567-55733bce55e5 container test-container: STEP: delete the pod Jun 19 13:25:25.695: INFO: Waiting for pod pod-b9f69c86-3007-4c95-a567-55733bce55e5 to disappear Jun 19 13:25:25.718: INFO: Pod pod-b9f69c86-3007-4c95-a567-55733bce55e5 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:25:25.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5057" for this suite. Jun 19 13:25:31.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:25:31.798: INFO: namespace emptydir-5057 deletion completed in 6.077418591s • [SLOW TEST:14.343 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:25:31.798: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-84c4c8a2-2054-40a7-9cba-4436270ae26c STEP: Creating a pod to test consume configMaps Jun 19 13:25:31.962: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6" in namespace "projected-9624" to be "success or failure" Jun 19 13:25:31.965: INFO: Pod "pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.674077ms Jun 19 13:25:33.969: INFO: Pod "pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007303346s Jun 19 13:25:35.985: INFO: Pod "pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.022932224s Jun 19 13:25:37.989: INFO: Pod "pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.027587903s STEP: Saw pod success Jun 19 13:25:37.989: INFO: Pod "pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6" satisfied condition "success or failure" Jun 19 13:25:37.992: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6 container projected-configmap-volume-test: STEP: delete the pod Jun 19 13:25:38.024: INFO: Waiting for pod pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6 to disappear Jun 19 13:25:38.038: INFO: Pod pod-projected-configmaps-156e9960-f9f7-4c4d-9deb-2bc53ab318a6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:25:38.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9624" for this suite. Jun 19 13:25:44.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:25:44.130: INFO: namespace projected-9624 deletion completed in 6.088899853s • [SLOW TEST:12.332 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:25:44.131: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jun 19 13:25:52.285: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 19 13:25:52.351: INFO: Pod pod-with-prestop-http-hook still exists
Jun 19 13:25:54.351: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 19 13:25:54.355: INFO: Pod pod-with-prestop-http-hook still exists
Jun 19 13:25:56.351: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 19 13:25:56.355: INFO: Pod pod-with-prestop-http-hook still exists
Jun 19 13:25:58.351: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 19 13:25:58.356: INFO: Pod pod-with-prestop-http-hook still exists
Jun 19 13:26:00.351: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 19 13:26:00.355: INFO: Pod pod-with-prestop-http-hook still exists
Jun 19 13:26:02.351: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jun 19 13:26:02.355: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 13:26:02.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-732" for this suite.
Jun 19 13:26:24.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 13:26:24.458: INFO: namespace container-lifecycle-hook-732 deletion completed in 22.092405205s
• [SLOW TEST:40.328 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 13:26:24.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jun 19 13:26:34.623: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jun 19 13:26:34.623: INFO: >>> kubeConfig: 
/root/.kube/config I0619 13:26:34.664020 6 log.go:172] (0xc000f604d0) (0xc0003ab4a0) Create stream I0619 13:26:34.664054 6 log.go:172] (0xc000f604d0) (0xc0003ab4a0) Stream added, broadcasting: 1 I0619 13:26:34.666326 6 log.go:172] (0xc000f604d0) Reply frame received for 1 I0619 13:26:34.666357 6 log.go:172] (0xc000f604d0) (0xc0012643c0) Create stream I0619 13:26:34.666491 6 log.go:172] (0xc000f604d0) (0xc0012643c0) Stream added, broadcasting: 3 I0619 13:26:34.667489 6 log.go:172] (0xc000f604d0) Reply frame received for 3 I0619 13:26:34.667522 6 log.go:172] (0xc000f604d0) (0xc0003ab5e0) Create stream I0619 13:26:34.667536 6 log.go:172] (0xc000f604d0) (0xc0003ab5e0) Stream added, broadcasting: 5 I0619 13:26:34.668717 6 log.go:172] (0xc000f604d0) Reply frame received for 5 I0619 13:26:34.756165 6 log.go:172] (0xc000f604d0) Data frame received for 5 I0619 13:26:34.756198 6 log.go:172] (0xc0003ab5e0) (5) Data frame handling I0619 13:26:34.756219 6 log.go:172] (0xc000f604d0) Data frame received for 3 I0619 13:26:34.756226 6 log.go:172] (0xc0012643c0) (3) Data frame handling I0619 13:26:34.756254 6 log.go:172] (0xc0012643c0) (3) Data frame sent I0619 13:26:34.756272 6 log.go:172] (0xc000f604d0) Data frame received for 3 I0619 13:26:34.756283 6 log.go:172] (0xc0012643c0) (3) Data frame handling I0619 13:26:34.757976 6 log.go:172] (0xc000f604d0) Data frame received for 1 I0619 13:26:34.758002 6 log.go:172] (0xc0003ab4a0) (1) Data frame handling I0619 13:26:34.758016 6 log.go:172] (0xc0003ab4a0) (1) Data frame sent I0619 13:26:34.758027 6 log.go:172] (0xc000f604d0) (0xc0003ab4a0) Stream removed, broadcasting: 1 I0619 13:26:34.758040 6 log.go:172] (0xc000f604d0) Go away received I0619 13:26:34.758246 6 log.go:172] (0xc000f604d0) (0xc0003ab4a0) Stream removed, broadcasting: 1 I0619 13:26:34.758278 6 log.go:172] (0xc000f604d0) (0xc0012643c0) Stream removed, broadcasting: 3 I0619 13:26:34.758368 6 log.go:172] (0xc000f604d0) (0xc0003ab5e0) Stream removed, broadcasting: 5 Jun 19 13:26:34.758: INFO: Exec stderr: "" Jun 19 13:26:34.758: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:34.758: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:34.785285 6 log.go:172] (0xc000f60f20) (0xc0003abcc0) Create stream I0619 13:26:34.785328 6 log.go:172] (0xc000f60f20) (0xc0003abcc0) Stream added, broadcasting: 1 I0619 13:26:34.787600 6 log.go:172] (0xc000f60f20) Reply frame received for 1 I0619 13:26:34.787653 6 log.go:172] (0xc000f60f20) (0xc001264640) Create stream I0619 13:26:34.787666 6 log.go:172] (0xc000f60f20) (0xc001264640) Stream added, broadcasting: 3 I0619 13:26:34.788539 6 log.go:172] (0xc000f60f20) Reply frame received for 3 I0619 13:26:34.788577 6 log.go:172] (0xc000f60f20) (0xc0012646e0) Create stream I0619 13:26:34.788593 6 log.go:172] (0xc000f60f20) (0xc0012646e0) Stream added, broadcasting: 5 I0619 13:26:34.789943 6 log.go:172] (0xc000f60f20) Reply frame received for 5 I0619 13:26:34.888617 6 log.go:172] (0xc000f60f20) Data frame received for 3 I0619 13:26:34.888649 6 log.go:172] (0xc001264640) (3) Data frame handling I0619 13:26:34.888658 6 log.go:172] (0xc001264640) (3) Data frame sent I0619 13:26:34.888671 6 log.go:172] (0xc000f60f20) Data frame received for 5 I0619 13:26:34.888680 6 log.go:172] (0xc0012646e0) (5) Data frame handling I0619 13:26:34.894218 6 log.go:172] (0xc000f60f20) Data frame received for 1 I0619 
13:26:34.894241 6 log.go:172] (0xc0003abcc0) (1) Data frame handling I0619 13:26:34.894254 6 log.go:172] (0xc0003abcc0) (1) Data frame sent I0619 13:26:34.894269 6 log.go:172] (0xc000f60f20) (0xc0003abcc0) Stream removed, broadcasting: 1 I0619 13:26:34.894283 6 log.go:172] (0xc000f60f20) Data frame received for 3 I0619 13:26:34.894293 6 log.go:172] (0xc001264640) (3) Data frame handling I0619 13:26:34.894306 6 log.go:172] (0xc000f60f20) Go away received I0619 13:26:34.894412 6 log.go:172] (0xc000f60f20) (0xc0003abcc0) Stream removed, broadcasting: 1 I0619 13:26:34.894425 6 log.go:172] (0xc000f60f20) (0xc001264640) Stream removed, broadcasting: 3 I0619 13:26:34.894430 6 log.go:172] (0xc000f60f20) (0xc0012646e0) Stream removed, broadcasting: 5 Jun 19 13:26:34.894: INFO: Exec stderr: "" Jun 19 13:26:34.894: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:34.894: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:34.916763 6 log.go:172] (0xc000f61760) (0xc002594140) Create stream I0619 13:26:34.916791 6 log.go:172] (0xc000f61760) (0xc002594140) Stream added, broadcasting: 1 I0619 13:26:34.919383 6 log.go:172] (0xc000f61760) Reply frame received for 1 I0619 13:26:34.919423 6 log.go:172] (0xc000f61760) (0xc0025941e0) Create stream I0619 13:26:34.919434 6 log.go:172] (0xc000f61760) (0xc0025941e0) Stream added, broadcasting: 3 I0619 13:26:34.920177 6 log.go:172] (0xc000f61760) Reply frame received for 3 I0619 13:26:34.920212 6 log.go:172] (0xc000f61760) (0xc0005b5400) Create stream I0619 13:26:34.920223 6 log.go:172] (0xc000f61760) (0xc0005b5400) Stream added, broadcasting: 5 I0619 13:26:34.920861 6 log.go:172] (0xc000f61760) Reply frame received for 5 I0619 13:26:34.983046 6 log.go:172] (0xc000f61760) Data frame received for 3 I0619 13:26:34.983081 6 log.go:172] (0xc0025941e0) (3) Data frame handling I0619 13:26:34.983091 6 log.go:172] (0xc0025941e0) (3) Data frame sent I0619 13:26:34.983097 6 log.go:172] (0xc000f61760) Data frame received for 3 I0619 13:26:34.983108 6 log.go:172] (0xc0025941e0) (3) Data frame handling I0619 13:26:34.983136 6 log.go:172] (0xc000f61760) Data frame received for 5 I0619 13:26:34.983145 6 log.go:172] (0xc0005b5400) (5) Data frame handling I0619 13:26:34.984223 6 log.go:172] (0xc000f61760) Data frame received for 1 I0619 13:26:34.984244 6 log.go:172] (0xc002594140) (1) Data frame handling I0619 13:26:34.984261 6 log.go:172] (0xc002594140) (1) Data frame sent I0619 13:26:34.984274 6 log.go:172] (0xc000f61760) (0xc002594140) Stream removed, broadcasting: 1 I0619 13:26:34.984295 6 log.go:172] (0xc000f61760) Go away received I0619 13:26:34.984422 6 log.go:172] (0xc000f61760) (0xc002594140) Stream removed, broadcasting: 1 I0619 13:26:34.984451 6 log.go:172] (0xc000f61760) (0xc0025941e0) Stream removed, broadcasting: 3 I0619 13:26:34.984472 6 log.go:172] (0xc000f61760) (0xc0005b5400) Stream removed, broadcasting: 5 Jun 19 13:26:34.984: INFO: Exec stderr: "" Jun 19 13:26:34.984: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:34.984: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:35.015665 6 log.go:172] (0xc001f40bb0) (0xc0005b5c20) Create stream I0619 13:26:35.015698 6 log.go:172] (0xc001f40bb0) (0xc0005b5c20) Stream added, broadcasting: 1 
I0619 13:26:35.022824 6 log.go:172] (0xc001f40bb0) Reply frame received for 1 I0619 13:26:35.022875 6 log.go:172] (0xc001f40bb0) (0xc001264780) Create stream I0619 13:26:35.022893 6 log.go:172] (0xc001f40bb0) (0xc001264780) Stream added, broadcasting: 3 I0619 13:26:35.024545 6 log.go:172] (0xc001f40bb0) Reply frame received for 3 I0619 13:26:35.024577 6 log.go:172] (0xc001f40bb0) (0xc0005b5ea0) Create stream I0619 13:26:35.024603 6 log.go:172] (0xc001f40bb0) (0xc0005b5ea0) Stream added, broadcasting: 5 I0619 13:26:35.026066 6 log.go:172] (0xc001f40bb0) Reply frame received for 5 I0619 13:26:35.106364 6 log.go:172] (0xc001f40bb0) Data frame received for 5 I0619 13:26:35.106390 6 log.go:172] (0xc0005b5ea0) (5) Data frame handling I0619 13:26:35.106412 6 log.go:172] (0xc001f40bb0) Data frame received for 3 I0619 13:26:35.106427 6 log.go:172] (0xc001264780) (3) Data frame handling I0619 13:26:35.106440 6 log.go:172] (0xc001264780) (3) Data frame sent I0619 13:26:35.106450 6 log.go:172] (0xc001f40bb0) Data frame received for 3 I0619 13:26:35.106458 6 log.go:172] (0xc001264780) (3) Data frame handling I0619 13:26:35.107875 6 log.go:172] (0xc001f40bb0) Data frame received for 1 I0619 13:26:35.107893 6 log.go:172] (0xc0005b5c20) (1) Data frame handling I0619 13:26:35.107901 6 log.go:172] (0xc0005b5c20) (1) Data frame sent I0619 13:26:35.107918 6 log.go:172] (0xc001f40bb0) (0xc0005b5c20) Stream removed, broadcasting: 1 I0619 13:26:35.107984 6 log.go:172] (0xc001f40bb0) Go away received I0619 13:26:35.108030 6 log.go:172] (0xc001f40bb0) (0xc0005b5c20) Stream removed, broadcasting: 1 I0619 13:26:35.108073 6 log.go:172] (0xc001f40bb0) (0xc001264780) Stream removed, broadcasting: 3 I0619 13:26:35.108087 6 log.go:172] (0xc001f40bb0) (0xc0005b5ea0) Stream removed, broadcasting: 5 Jun 19 13:26:35.108: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Jun 19 13:26:35.108: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:35.108: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:35.138032 6 log.go:172] (0xc001f41ad0) (0xc0001d83c0) Create stream I0619 13:26:35.138057 6 log.go:172] (0xc001f41ad0) (0xc0001d83c0) Stream added, broadcasting: 1 I0619 13:26:35.140736 6 log.go:172] (0xc001f41ad0) Reply frame received for 1 I0619 13:26:35.140771 6 log.go:172] (0xc001f41ad0) (0xc001264820) Create stream I0619 13:26:35.140782 6 log.go:172] (0xc001f41ad0) (0xc001264820) Stream added, broadcasting: 3 I0619 13:26:35.142012 6 log.go:172] (0xc001f41ad0) Reply frame received for 3 I0619 13:26:35.142051 6 log.go:172] (0xc001f41ad0) (0xc001264960) Create stream I0619 13:26:35.142065 6 log.go:172] (0xc001f41ad0) (0xc001264960) Stream added, broadcasting: 5 I0619 13:26:35.143062 6 log.go:172] (0xc001f41ad0) Reply frame received for 5 I0619 13:26:35.218919 6 log.go:172] (0xc001f41ad0) Data frame received for 3 I0619 13:26:35.218963 6 log.go:172] (0xc001264820) (3) Data frame handling I0619 13:26:35.218973 6 log.go:172] (0xc001264820) (3) Data frame sent I0619 13:26:35.218985 6 log.go:172] (0xc001f41ad0) Data frame received for 3 I0619 13:26:35.218991 6 log.go:172] (0xc001264820) (3) Data frame handling I0619 13:26:35.219021 6 log.go:172] (0xc001f41ad0) Data frame received for 5 I0619 13:26:35.219034 6 log.go:172] (0xc001264960) (5) Data frame handling I0619 13:26:35.220206 6 log.go:172] 
(0xc001f41ad0) Data frame received for 1 I0619 13:26:35.220232 6 log.go:172] (0xc0001d83c0) (1) Data frame handling I0619 13:26:35.220250 6 log.go:172] (0xc0001d83c0) (1) Data frame sent I0619 13:26:35.220289 6 log.go:172] (0xc001f41ad0) (0xc0001d83c0) Stream removed, broadcasting: 1 I0619 13:26:35.220423 6 log.go:172] (0xc001f41ad0) (0xc0001d83c0) Stream removed, broadcasting: 1 I0619 13:26:35.220443 6 log.go:172] (0xc001f41ad0) (0xc001264820) Stream removed, broadcasting: 3 I0619 13:26:35.220617 6 log.go:172] (0xc001f41ad0) Go away received I0619 13:26:35.220662 6 log.go:172] (0xc001f41ad0) (0xc001264960) Stream removed, broadcasting: 5 Jun 19 13:26:35.220: INFO: Exec stderr: "" Jun 19 13:26:35.220: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:35.220: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:35.251214 6 log.go:172] (0xc0019feb00) (0xc003000280) Create stream I0619 13:26:35.251244 6 log.go:172] (0xc0019feb00) (0xc003000280) Stream added, broadcasting: 1 I0619 13:26:35.253706 6 log.go:172] (0xc0019feb00) Reply frame received for 1 I0619 13:26:35.253746 6 log.go:172] (0xc0019feb00) (0xc0034ac000) Create stream I0619 13:26:35.253761 6 log.go:172] (0xc0019feb00) (0xc0034ac000) Stream added, broadcasting: 3 I0619 13:26:35.254996 6 log.go:172] (0xc0019feb00) Reply frame received for 3 I0619 13:26:35.255048 6 log.go:172] (0xc0019feb00) (0xc0034ac0a0) Create stream I0619 13:26:35.255063 6 log.go:172] (0xc0019feb00) (0xc0034ac0a0) Stream added, broadcasting: 5 I0619 13:26:35.255987 6 log.go:172] (0xc0019feb00) Reply frame received for 5 I0619 13:26:35.321634 6 log.go:172] (0xc0019feb00) Data frame received for 5 I0619 13:26:35.321667 6 log.go:172] (0xc0034ac0a0) (5) Data frame handling I0619 13:26:35.321689 6 log.go:172] (0xc0019feb00) Data frame received for 3 I0619 13:26:35.321697 6 log.go:172] (0xc0034ac000) (3) Data frame handling I0619 13:26:35.321730 6 log.go:172] (0xc0034ac000) (3) Data frame sent I0619 13:26:35.321738 6 log.go:172] (0xc0019feb00) Data frame received for 3 I0619 13:26:35.321744 6 log.go:172] (0xc0034ac000) (3) Data frame handling I0619 13:26:35.322767 6 log.go:172] (0xc0019feb00) Data frame received for 1 I0619 13:26:35.322776 6 log.go:172] (0xc003000280) (1) Data frame handling I0619 13:26:35.322782 6 log.go:172] (0xc003000280) (1) Data frame sent I0619 13:26:35.322926 6 log.go:172] (0xc0019feb00) (0xc003000280) Stream removed, broadcasting: 1 I0619 13:26:35.322985 6 log.go:172] (0xc0019feb00) Go away received I0619 13:26:35.323097 6 log.go:172] (0xc0019feb00) (0xc003000280) Stream removed, broadcasting: 1 I0619 13:26:35.323130 6 log.go:172] (0xc0019feb00) (0xc0034ac000) Stream removed, broadcasting: 3 I0619 13:26:35.323154 6 log.go:172] (0xc0019feb00) (0xc0034ac0a0) Stream removed, broadcasting: 5 Jun 19 13:26:35.323: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Jun 19 13:26:35.323: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:35.323: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:35.356026 6 log.go:172] (0xc0021cc8f0) (0xc0001d8aa0) Create stream I0619 13:26:35.356061 6 log.go:172] (0xc0021cc8f0) (0xc0001d8aa0) Stream added, 
broadcasting: 1 I0619 13:26:35.358885 6 log.go:172] (0xc0021cc8f0) Reply frame received for 1 I0619 13:26:35.358920 6 log.go:172] (0xc0021cc8f0) (0xc0034ac140) Create stream I0619 13:26:35.358932 6 log.go:172] (0xc0021cc8f0) (0xc0034ac140) Stream added, broadcasting: 3 I0619 13:26:35.359678 6 log.go:172] (0xc0021cc8f0) Reply frame received for 3 I0619 13:26:35.359709 6 log.go:172] (0xc0021cc8f0) (0xc001264a00) Create stream I0619 13:26:35.359726 6 log.go:172] (0xc0021cc8f0) (0xc001264a00) Stream added, broadcasting: 5 I0619 13:26:35.360548 6 log.go:172] (0xc0021cc8f0) Reply frame received for 5 I0619 13:26:35.408112 6 log.go:172] (0xc0021cc8f0) Data frame received for 5 I0619 13:26:35.408165 6 log.go:172] (0xc001264a00) (5) Data frame handling I0619 13:26:35.408203 6 log.go:172] (0xc0021cc8f0) Data frame received for 3 I0619 13:26:35.408281 6 log.go:172] (0xc0034ac140) (3) Data frame handling I0619 13:26:35.408325 6 log.go:172] (0xc0034ac140) (3) Data frame sent I0619 13:26:35.408346 6 log.go:172] (0xc0021cc8f0) Data frame received for 3 I0619 13:26:35.408364 6 log.go:172] (0xc0034ac140) (3) Data frame handling I0619 13:26:35.410373 6 log.go:172] (0xc0021cc8f0) Data frame received for 1 I0619 13:26:35.410404 6 log.go:172] (0xc0001d8aa0) (1) Data frame handling I0619 13:26:35.410422 6 log.go:172] (0xc0001d8aa0) (1) Data frame sent I0619 13:26:35.410442 6 log.go:172] (0xc0021cc8f0) (0xc0001d8aa0) Stream removed, broadcasting: 1 I0619 13:26:35.410461 6 log.go:172] (0xc0021cc8f0) Go away received I0619 13:26:35.410633 6 log.go:172] (0xc0021cc8f0) (0xc0001d8aa0) Stream removed, broadcasting: 1 I0619 13:26:35.410660 6 log.go:172] (0xc0021cc8f0) (0xc0034ac140) Stream removed, broadcasting: 3 I0619 13:26:35.410671 6 log.go:172] (0xc0021cc8f0) (0xc001264a00) Stream removed, broadcasting: 5 Jun 19 13:26:35.410: INFO: Exec stderr: "" Jun 19 13:26:35.410: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:35.410: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:35.444151 6 log.go:172] (0xc0019ff810) (0xc003000820) Create stream I0619 13:26:35.444178 6 log.go:172] (0xc0019ff810) (0xc003000820) Stream added, broadcasting: 1 I0619 13:26:35.447522 6 log.go:172] (0xc0019ff810) Reply frame received for 1 I0619 13:26:35.447602 6 log.go:172] (0xc0019ff810) (0xc0025943c0) Create stream I0619 13:26:35.447618 6 log.go:172] (0xc0019ff810) (0xc0025943c0) Stream added, broadcasting: 3 I0619 13:26:35.448842 6 log.go:172] (0xc0019ff810) Reply frame received for 3 I0619 13:26:35.448905 6 log.go:172] (0xc0019ff810) (0xc0034ac1e0) Create stream I0619 13:26:35.448936 6 log.go:172] (0xc0019ff810) (0xc0034ac1e0) Stream added, broadcasting: 5 I0619 13:26:35.450422 6 log.go:172] (0xc0019ff810) Reply frame received for 5 I0619 13:26:35.526623 6 log.go:172] (0xc0019ff810) Data frame received for 5 I0619 13:26:35.526652 6 log.go:172] (0xc0034ac1e0) (5) Data frame handling I0619 13:26:35.526670 6 log.go:172] (0xc0019ff810) Data frame received for 3 I0619 13:26:35.526680 6 log.go:172] (0xc0025943c0) (3) Data frame handling I0619 13:26:35.526691 6 log.go:172] (0xc0025943c0) (3) Data frame sent I0619 13:26:35.526699 6 log.go:172] (0xc0019ff810) Data frame received for 3 I0619 13:26:35.526711 6 log.go:172] (0xc0025943c0) (3) Data frame handling I0619 13:26:35.528510 6 log.go:172] (0xc0019ff810) Data frame received for 1 I0619 13:26:35.528524 6 
log.go:172] (0xc003000820) (1) Data frame handling I0619 13:26:35.528531 6 log.go:172] (0xc003000820) (1) Data frame sent I0619 13:26:35.528539 6 log.go:172] (0xc0019ff810) (0xc003000820) Stream removed, broadcasting: 1 I0619 13:26:35.528552 6 log.go:172] (0xc0019ff810) Go away received I0619 13:26:35.528707 6 log.go:172] (0xc0019ff810) (0xc003000820) Stream removed, broadcasting: 1 I0619 13:26:35.528755 6 log.go:172] (0xc0019ff810) (0xc0025943c0) Stream removed, broadcasting: 3 I0619 13:26:35.528825 6 log.go:172] (0xc0019ff810) (0xc0034ac1e0) Stream removed, broadcasting: 5 Jun 19 13:26:35.528: INFO: Exec stderr: "" Jun 19 13:26:35.528: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:35.528: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:35.561567 6 log.go:172] (0xc002670d10) (0xc001264fa0) Create stream I0619 13:26:35.561599 6 log.go:172] (0xc002670d10) (0xc001264fa0) Stream added, broadcasting: 1 I0619 13:26:35.564517 6 log.go:172] (0xc002670d10) Reply frame received for 1 I0619 13:26:35.564563 6 log.go:172] (0xc002670d10) (0xc003000960) Create stream I0619 13:26:35.564578 6 log.go:172] (0xc002670d10) (0xc003000960) Stream added, broadcasting: 3 I0619 13:26:35.565883 6 log.go:172] (0xc002670d10) Reply frame received for 3 I0619 13:26:35.565941 6 log.go:172] (0xc002670d10) (0xc0034ac280) Create stream I0619 13:26:35.565966 6 log.go:172] (0xc002670d10) (0xc0034ac280) Stream added, broadcasting: 5 I0619 13:26:35.567015 6 log.go:172] (0xc002670d10) Reply frame received for 5 I0619 13:26:35.621517 6 log.go:172] (0xc002670d10) Data frame received for 3 I0619 13:26:35.621569 6 log.go:172] (0xc003000960) (3) Data frame handling I0619 13:26:35.621591 6 log.go:172] (0xc003000960) (3) Data frame sent I0619 13:26:35.621605 6 log.go:172] (0xc002670d10) Data frame received for 3 I0619 13:26:35.621618 6 log.go:172] (0xc003000960) (3) Data frame handling I0619 13:26:35.621664 6 log.go:172] (0xc002670d10) Data frame received for 5 I0619 13:26:35.621689 6 log.go:172] (0xc0034ac280) (5) Data frame handling I0619 13:26:35.623257 6 log.go:172] (0xc002670d10) Data frame received for 1 I0619 13:26:35.623281 6 log.go:172] (0xc001264fa0) (1) Data frame handling I0619 13:26:35.623297 6 log.go:172] (0xc001264fa0) (1) Data frame sent I0619 13:26:35.623311 6 log.go:172] (0xc002670d10) (0xc001264fa0) Stream removed, broadcasting: 1 I0619 13:26:35.623413 6 log.go:172] (0xc002670d10) (0xc001264fa0) Stream removed, broadcasting: 1 I0619 13:26:35.623444 6 log.go:172] (0xc002670d10) (0xc003000960) Stream removed, broadcasting: 3 I0619 13:26:35.623566 6 log.go:172] (0xc002670d10) Go away received I0619 13:26:35.623746 6 log.go:172] (0xc002670d10) (0xc0034ac280) Stream removed, broadcasting: 5 Jun 19 13:26:35.623: INFO: Exec stderr: "" Jun 19 13:26:35.623: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4990 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:26:35.623: INFO: >>> kubeConfig: /root/.kube/config I0619 13:26:35.655587 6 log.go:172] (0xc0026718c0) (0xc0012654a0) Create stream I0619 13:26:35.655616 6 log.go:172] (0xc0026718c0) (0xc0012654a0) Stream added, broadcasting: 1 I0619 13:26:35.658927 6 log.go:172] (0xc0026718c0) Reply frame received for 1 I0619 13:26:35.658963 6 log.go:172] (0xc0026718c0) (0xc003000a00) 
Create stream I0619 13:26:35.658976 6 log.go:172] (0xc0026718c0) (0xc003000a00) Stream added, broadcasting: 3 I0619 13:26:35.660103 6 log.go:172] (0xc0026718c0) Reply frame received for 3 I0619 13:26:35.660148 6 log.go:172] (0xc0026718c0) (0xc003000aa0) Create stream I0619 13:26:35.660166 6 log.go:172] (0xc0026718c0) (0xc003000aa0) Stream added, broadcasting: 5 I0619 13:26:35.661367 6 log.go:172] (0xc0026718c0) Reply frame received for 5 I0619 13:26:35.720236 6 log.go:172] (0xc0026718c0) Data frame received for 5 I0619 13:26:35.720275 6 log.go:172] (0xc003000aa0) (5) Data frame handling I0619 13:26:35.720299 6 log.go:172] (0xc0026718c0) Data frame received for 3 I0619 13:26:35.720314 6 log.go:172] (0xc003000a00) (3) Data frame handling I0619 13:26:35.720342 6 log.go:172] (0xc003000a00) (3) Data frame sent I0619 13:26:35.720353 6 log.go:172] (0xc0026718c0) Data frame received for 3 I0619 13:26:35.720365 6 log.go:172] (0xc003000a00) (3) Data frame handling I0619 13:26:35.721717 6 log.go:172] (0xc0026718c0) Data frame received for 1 I0619 13:26:35.721742 6 log.go:172] (0xc0012654a0) (1) Data frame handling I0619 13:26:35.721782 6 log.go:172] (0xc0012654a0) (1) Data frame sent I0619 13:26:35.722090 6 log.go:172] (0xc0026718c0) (0xc0012654a0) Stream removed, broadcasting: 1 I0619 13:26:35.722146 6 log.go:172] (0xc0026718c0) Go away received I0619 13:26:35.722188 6 log.go:172] (0xc0026718c0) (0xc0012654a0) Stream removed, broadcasting: 1 I0619 13:26:35.722208 6 log.go:172] (0xc0026718c0) (0xc003000a00) Stream removed, broadcasting: 3 I0619 13:26:35.722221 6 log.go:172] (0xc0026718c0) (0xc003000aa0) Stream removed, broadcasting: 5 Jun 19 13:26:35.722: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:26:35.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-4990" for this suite. 
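
The exec checks above probe three containers in a regular pod and two in a hostNetwork pod to see which copies of /etc/hosts the kubelet manages. The manifests themselves are never printed in the log; the sketch below, built with the k8s.io/api types the suite itself uses, shows the pod shapes the checks imply. The image, command, and volume name are assumptions, not the test's actual values.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Pod whose containers get a kubelet-managed /etc/hosts, except the one
	// that mounts the file explicitly (volume name and path are hypothetical).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-pod"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "host-etc-hosts",
				VolumeSource: corev1.VolumeSource{
					HostPath: &corev1.HostPathVolumeSource{Path: "/etc/hosts"},
				},
			}},
			Containers: []corev1.Container{
				{Name: "busybox-1", Image: "busybox", Command: []string{"sleep", "900"}},
				{Name: "busybox-2", Image: "busybox", Command: []string{"sleep", "900"}},
				{
					// Mounting /etc/hosts directly opts this container out of
					// kubelet management of the file.
					Name: "busybox-3", Image: "busybox", Command: []string{"sleep", "900"},
					VolumeMounts: []corev1.VolumeMount{{Name: "host-etc-hosts", MountPath: "/etc/hosts"}},
				},
			},
		},
	}
	// With hostNetwork=true the kubelet leaves /etc/hosts alone for all containers.
	hostNetPod := pod
	hostNetPod.Name = "test-host-network-pod"
	hostNetPod.Spec.HostNetwork = true

	for _, p := range []corev1.Pod{pod, hostNetPod} {
		b, _ := json.MarshalIndent(p, "", "  ")
		fmt.Println(string(b))
	}
}
```

The rule the test encodes: the kubelet writes /etc/hosts for pod-network containers unless the container mounts the file itself, and it does not touch hostNetwork pods at all.
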
Jun 19 13:27:25.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:27:25.818: INFO: namespace e2e-kubelet-etc-hosts-4990 deletion completed in 50.091780018s • [SLOW TEST:61.359 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:27:25.818: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 19 13:27:32.424: INFO: Successfully updated pod "labelsupdateb671d2ab-7eeb-4285-9692-920a4910d7c6" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:27:34.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2820" for this suite. 
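
The projected downwardAPI test above creates a pod, patches its labels, and waits for the mounted labels file to follow ("Successfully updated pod"). A minimal sketch of the kind of projected volume involved; the container name, image, and mount path are assumptions, since the log only shows the generated pod name.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "labelsupdate-demo", // hypothetical name
			Labels: map[string]string{"key": "value1"},
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								// Render the pod's labels into a file named "labels".
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "labels",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

After a label update (for example an API patch, or `kubectl label pod labelsupdate-demo key=value2 --overwrite`), the kubelet rewrites /etc/podinfo/labels on its next sync, which is what a test like this polls for.
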
Jun 19 13:27:56.488: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:27:56.580: INFO: namespace projected-2820 deletion completed in 22.106225769s • [SLOW TEST:30.761 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:27:56.581: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Jun 19 13:28:00.736: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:28:00.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6740" for this suite. 
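
In the termination-message test above, the container writes "OK" and exits zero, and the kubelet surfaces that as the termination message (the log's `Expected: &{OK}`). A sketch of a pod exercising the same mechanism, with hypothetical names; the exact command the suite uses is not shown in the log.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "termination-message-container",
				Image: "busybox",
				// Write the message to the termination log file and exit 0.
				Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

FallbackToLogsOnError only substitutes the tail of the container log when the termination file is empty and the container failed; here the pod succeeds and the file has content, so the file wins, ending up in `status.containerStatuses[0].state.terminated.message`.
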
Jun 19 13:28:06.854: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:28:06.934: INFO: namespace container-runtime-6740 deletion completed in 6.093636673s • [SLOW TEST:10.353 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:28:06.935: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-eb60e48d-d25e-49ef-a242-0e77ce0c41e0 STEP: Creating a pod to test consume configMaps Jun 19 13:28:06.994: INFO: Waiting up to 5m0s for pod "pod-configmaps-6c12bf6c-80db-42a5-9678-ee97ada64e39" in namespace "configmap-1385" to be "success or failure" Jun 19 13:28:07.029: INFO: Pod "pod-configmaps-6c12bf6c-80db-42a5-9678-ee97ada64e39": Phase="Pending", Reason="", readiness=false. Elapsed: 35.317326ms Jun 19 13:28:09.033: INFO: Pod "pod-configmaps-6c12bf6c-80db-42a5-9678-ee97ada64e39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039503246s Jun 19 13:28:11.038: INFO: Pod "pod-configmaps-6c12bf6c-80db-42a5-9678-ee97ada64e39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044171098s STEP: Saw pod success Jun 19 13:28:11.038: INFO: Pod "pod-configmaps-6c12bf6c-80db-42a5-9678-ee97ada64e39" satisfied condition "success or failure" Jun 19 13:28:11.040: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6c12bf6c-80db-42a5-9678-ee97ada64e39 container configmap-volume-test: STEP: delete the pod Jun 19 13:28:11.110: INFO: Waiting for pod pod-configmaps-6c12bf6c-80db-42a5-9678-ee97ada64e39 to disappear Jun 19 13:28:11.125: INFO: Pod pod-configmaps-6c12bf6c-80db-42a5-9678-ee97ada64e39 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:28:11.125: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1385" for this suite. 
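
The ConfigMap test above mounts a single ConfigMap through two volumes in one pod. A sketch of that shape; the ConfigMap name, keys, and mount paths below are assumptions, since the manifest is not printed.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// The same ConfigMap backs both volumes; each mount gets its own path.
	cmSource := func() corev1.VolumeSource {
		return corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
			},
		}
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: cmSource()},
				{Name: "configmap-volume-2", VolumeSource: cmSource()},
			},
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/cm-1", ReadOnly: true},
					{Name: "configmap-volume-2", MountPath: "/etc/cm-2", ReadOnly: true},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```
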
Jun 19 13:28:17.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:28:17.289: INFO: namespace configmap-1385 deletion completed in 6.159902922s • [SLOW TEST:10.355 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:28:17.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-68850387-78e2-40f5-9d93-34fa7fff9b55 STEP: Creating a pod to test consume configMaps Jun 19 13:28:17.384: INFO: Waiting up to 5m0s for pod "pod-configmaps-189cd9ce-7a06-4012-a3e5-cd39b4f713c9" in namespace "configmap-2198" to be "success or failure" Jun 19 13:28:17.401: INFO: Pod "pod-configmaps-189cd9ce-7a06-4012-a3e5-cd39b4f713c9": Phase="Pending", Reason="", readiness=false. Elapsed: 16.891426ms Jun 19 13:28:19.406: INFO: Pod "pod-configmaps-189cd9ce-7a06-4012-a3e5-cd39b4f713c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021607205s Jun 19 13:28:21.410: INFO: Pod "pod-configmaps-189cd9ce-7a06-4012-a3e5-cd39b4f713c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025763462s STEP: Saw pod success Jun 19 13:28:21.410: INFO: Pod "pod-configmaps-189cd9ce-7a06-4012-a3e5-cd39b4f713c9" satisfied condition "success or failure" Jun 19 13:28:21.413: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-189cd9ce-7a06-4012-a3e5-cd39b4f713c9 container configmap-volume-test: STEP: delete the pod Jun 19 13:28:21.446: INFO: Waiting for pod pod-configmaps-189cd9ce-7a06-4012-a3e5-cd39b4f713c9 to disappear Jun 19 13:28:21.455: INFO: Pod pod-configmaps-189cd9ce-7a06-4012-a3e5-cd39b4f713c9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:28:21.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2198" for this suite. 
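
"Volume with mappings" in the test above refers to the `items` field of a ConfigMap volume, which remaps a ConfigMap key to a chosen file path (and optionally a file mode) inside the mount. A sketch under assumed key and path names:

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mode := int32(0644) // hypothetical per-file mode
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-mappings-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// Without Items every key becomes a file named after the
						// key; with Items only the listed keys appear, at the
						// given relative paths.
						Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2", Mode: &mode}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```
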
Jun 19 13:28:27.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:28:27.637: INFO: namespace configmap-2198 deletion completed in 6.178357322s • [SLOW TEST:10.347 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:28:27.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jun 19 13:28:32.230: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4950 pod-service-account-e63db086-76be-4d3f-8fd2-6791372f0cdf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jun 19 13:28:32.468: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4950 pod-service-account-e63db086-76be-4d3f-8fd2-6791372f0cdf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jun 19 13:28:32.666: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4950 pod-service-account-e63db086-76be-4d3f-8fd2-6791372f0cdf -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:28:32.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4950" for this suite. 
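
The ServiceAccounts test above reads token, ca.crt, and namespace through `kubectl exec ... cat`. From inside any pod whose service account is automounted (the default), the same three files can be read directly at the standard mount path; a small sketch of doing that in-process:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// Reads the files the log inspects via kubectl exec; intended to run inside a
// pod with a mounted service account token.
func main() {
	dir := "/var/run/secrets/kubernetes.io/serviceaccount"
	for _, name := range []string{"token", "ca.crt", "namespace"} {
		b, err := os.ReadFile(filepath.Join(dir, name))
		if err != nil {
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		// Avoid printing the token itself; the length is enough to confirm it.
		fmt.Printf("%s: %d bytes\n", name, len(b))
	}
}
```
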
Jun 19 13:28:38.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:28:38.995: INFO: namespace svcaccounts-4950 deletion completed in 6.110587111s • [SLOW TEST:11.357 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:28:38.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 19 13:28:39.051: INFO: Waiting up to 5m0s for pod "pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8" in namespace "emptydir-1511" to be "success or failure" Jun 19 13:28:39.065: INFO: Pod "pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8": Phase="Pending", Reason="", readiness=false. Elapsed: 13.753712ms Jun 19 13:28:41.069: INFO: Pod "pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018067427s Jun 19 13:28:43.073: INFO: Pod "pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8": Phase="Running", Reason="", readiness=true. Elapsed: 4.022163592s Jun 19 13:28:45.079: INFO: Pod "pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.028027384s STEP: Saw pod success Jun 19 13:28:45.079: INFO: Pod "pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8" satisfied condition "success or failure" Jun 19 13:28:45.082: INFO: Trying to get logs from node iruya-worker pod pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8 container test-container: STEP: delete the pod Jun 19 13:28:45.103: INFO: Waiting for pod pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8 to disappear Jun 19 13:28:45.107: INFO: Pod pod-11199b37-a6f1-4892-ae56-f6028e2e3ba8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:28:45.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1511" for this suite. 
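
The "(non-root,0644,tmpfs)" case above combines an emptyDir backed by memory with a container running as a non-root user writing a 0644 file. A sketch of that combination; the UID, image, command, and mount path are assumptions.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	nonRoot := int64(1001) // hypothetical non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the volume with tmpfs rather than
					// node disk; contents disappear with the pod.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file as the non-root user, force mode 0644, show it.
				Command:         []string{"sh", "-c", "echo hi > /mnt/test/f && chmod 0644 /mnt/test/f && ls -l /mnt/test/f"},
				SecurityContext: &corev1.SecurityContext{RunAsUser: &nonRoot},
				VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt/test"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```

The non-root write works because the kubelet creates emptyDir directories world-writable by default; the test's assertion is about the resulting file mode and content as seen from the container.
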
Jun 19 13:28:51.137: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:28:51.215: INFO: namespace emptydir-1511 deletion completed in 6.10494894s • [SLOW TEST:12.220 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:28:51.215: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-5157 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-5157 STEP: Creating statefulset with conflicting port in namespace statefulset-5157 STEP: Waiting until pod test-pod will start running in namespace statefulset-5157 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-5157 Jun 19 13:28:55.378: INFO: Observed stateful pod in namespace: statefulset-5157, name: ss-0, uid: c0dda253-e4ec-4906-80da-9687e48ba65a, status phase: Pending. Waiting for statefulset controller to delete. Jun 19 13:29:02.149: INFO: Observed stateful pod in namespace: statefulset-5157, name: ss-0, uid: c0dda253-e4ec-4906-80da-9687e48ba65a, status phase: Failed. Waiting for statefulset controller to delete. Jun 19 13:29:02.176: INFO: Observed stateful pod in namespace: statefulset-5157, name: ss-0, uid: c0dda253-e4ec-4906-80da-9687e48ba65a, status phase: Failed. Waiting for statefulset controller to delete. 
Jun 19 13:29:02.194: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5157 STEP: Removing pod with conflicting port in namespace statefulset-5157 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-5157 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 19 13:34:02.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-5157' Jun 19 13:34:05.199: INFO: stderr: "" Jun 19 13:34:05.199: INFO: stdout: "Name: ss-0\nNamespace: statefulset-5157\nPriority: 0\nNode: iruya-worker/\nLabels: baz=blah\n controller-revision-hash=ss-5867494796\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: \nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-dchb9 (ro)\nVolumes:\n default-token-dchb9:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-dchb9\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m2s kubelet, iruya-worker Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m1s kubelet, iruya-worker Created container nginx\n Normal Started 5m1s kubelet, iruya-worker Started container nginx\n" Jun 19 13:34:05.199: INFO: Output of kubectl describe ss-0: Name: ss-0 Namespace: statefulset-5157 Priority: 0 Node: iruya-worker/ Labels: baz=blah controller-revision-hash=ss-5867494796 foo=bar statefulset.kubernetes.io/pod-name=ss-0 Annotations: Status: Pending IP: Controlled By: StatefulSet/ss Containers: nginx: Image: docker.io/library/nginx:1.14-alpine Port: 21017/TCP Host Port: 21017/TCP Environment: Mounts: /var/run/secrets/kubernetes.io/serviceaccount from default-token-dchb9 (ro) Volumes: default-token-dchb9: Type: Secret (a volume populated by a Secret) SecretName: default-token-dchb9 Optional: false QoS Class: BestEffort Node-Selectors: Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s node.kubernetes.io/unreachable:NoExecute for 300s Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Pulled 5m2s kubelet, iruya-worker Container image "docker.io/library/nginx:1.14-alpine" already present on machine Normal Created 5m1s kubelet, iruya-worker Created container nginx Normal Started 5m1s kubelet, iruya-worker Started container nginx Jun 19 13:34:05.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-5157 --tail=100' Jun 19 13:34:05.316: INFO: stderr: "" Jun 19 13:34:05.316: INFO: stdout: "" Jun 19 13:34:05.316: INFO: Last 100 log lines of ss-0: Jun 19 13:34:05.316: INFO: Deleting all statefulset in ns statefulset-5157 Jun 19 13:34:05.319: INFO: Scaling statefulset ss to 0 Jun 19 13:34:15.338: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 13:34:15.340: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Collecting events from namespace 
"statefulset-5157". STEP: Found 17 events. Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:51 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:51 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:51 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-5157/ss is recreating failed Pod ss-0 Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:51 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:51 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:51 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:52 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:52 +0000 UTC - event for test-pod: {kubelet iruya-worker} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:53 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:53 +0000 UTC - event for test-pod: {kubelet iruya-worker} Created: Created container nginx Jun 19 13:34:15.358: INFO: At 2020-06-19 13:28:54 +0000 UTC - event for test-pod: {kubelet iruya-worker} Started: Started container nginx Jun 19 13:34:15.358: INFO: At 2020-06-19 13:29:02 +0000 UTC - event for ss-0: {kubelet iruya-worker} PodFitsHostPorts: Predicate PodFitsHostPorts failed Jun 19 13:34:15.358: INFO: At 2020-06-19 13:29:02 +0000 UTC - event for test-pod: {kubelet iruya-worker} Killing: Stopping container nginx Jun 19 13:34:15.358: INFO: At 2020-06-19 13:29:03 +0000 UTC - event for ss-0: {kubelet iruya-worker} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine Jun 19 13:34:15.358: INFO: At 2020-06-19 13:29:04 +0000 UTC - event for ss-0: {kubelet iruya-worker} Created: Created container nginx Jun 19 13:34:15.358: INFO: At 2020-06-19 13:29:04 +0000 UTC - event for ss-0: {kubelet iruya-worker} Started: Started container nginx Jun 19 13:34:15.358: INFO: At 2020-06-19 13:34:05 +0000 UTC - event for ss-0: {kubelet iruya-worker} Killing: Stopping container nginx Jun 19 13:34:15.360: INFO: POD NODE PHASE GRACE CONDITIONS Jun 19 13:34:15.360: INFO: Jun 19 13:34:15.367: INFO: Logging node info for node iruya-control-plane Jun 19 13:34:15.369: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-control-plane,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-control-plane,UID:5b69a0f9-55ac-48be-a8d0-5e04b939b798,ResourceVersion:17319462,Generation:0,CreationTimestamp:2020-03-15 18:24:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-control-plane,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: 
/run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[{node-role.kubernetes.io/master NoSchedule }],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-06-19 13:33:26 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-06-19 13:33:26 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-06-19 13:33:26 +0000 UTC 2020-03-15 18:24:20 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-06-19 13:33:26 +0000 UTC 2020-03-15 18:25:00 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.7} {Hostname iruya-control-plane}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:09f14f6f4d1640fcaab2243401c9f154,SystemUUID:7c6ca533-492e-400c-b058-c282f97a69ec,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[k8s.gcr.io/pause:3.1] 746479}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Jun 19 13:34:15.369: INFO: Logging kubelet events for node iruya-control-plane Jun 19 13:34:15.371: INFO: Logging pods the kubelet thinks is on node iruya-control-plane Jun 19 13:34:15.378: INFO: local-path-provisioner-d4947b89c-72frh started at 2020-03-15 18:25:04 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.378: INFO: Container local-path-provisioner ready: true, restart count 75 Jun 19 13:34:15.378: INFO: kube-apiserver-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.378: INFO: Container kube-apiserver ready: true, restart count 1 Jun 19 13:34:15.378: INFO: kube-controller-manager-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.378: INFO: Container kube-controller-manager ready: true, restart count 71 Jun 19 13:34:15.378: INFO: kube-scheduler-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.378: INFO: Container 
kube-scheduler ready: true, restart count 72 Jun 19 13:34:15.378: INFO: etcd-iruya-control-plane started at 2020-03-15 18:24:08 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.378: INFO: Container etcd ready: true, restart count 0 Jun 19 13:34:15.378: INFO: kindnet-zn8sx started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.378: INFO: Container kindnet-cni ready: true, restart count 1 Jun 19 13:34:15.378: INFO: kube-proxy-46nsr started at 2020-03-15 18:24:40 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.378: INFO: Container kube-proxy ready: true, restart count 0 W0619 13:34:15.380701 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 19 13:34:15.512: INFO: Latency metrics for node iruya-control-plane Jun 19 13:34:15.512: INFO: Logging node info for node iruya-worker Jun 19 13:34:15.516: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker,UID:94e58020-6063-4274-b0bd-d7c4f772701c,ResourceVersion:17319516,Generation:0,CreationTimestamp:2020-03-15 18:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.2.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-06-19 13:33:53 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-06-19 13:33:53 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-06-19 13:33:53 +0000 UTC 2020-03-15 18:24:54 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-06-19 13:33:53 +0000 UTC 2020-03-15 18:25:15 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.6} {Hostname iruya-worker}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5332b21b7d0c4f35b2434f4fc8bea1cf,SystemUUID:92e1ff09-3c3c-490b-b499-0de27dc489ae,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} 
{[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} 
{[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Jun 19 13:34:15.517: INFO: Logging kubelet events for node iruya-worker Jun 19 13:34:15.521: INFO: Logging pods the kubelet thinks is on node iruya-worker Jun 19 13:34:15.526: INFO: kube-proxy-pmz4p started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.526: INFO: Container kube-proxy ready: true, restart count 0 Jun 19 13:34:15.526: INFO: kindnet-gwz5g started at 2020-03-15 18:24:55 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.526: INFO: Container kindnet-cni ready: true, restart count 2 W0619 13:34:15.529293 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 19 13:34:15.572: INFO: Latency metrics for node iruya-worker Jun 19 13:34:15.572: INFO: Logging node info for node iruya-worker2 Jun 19 13:34:15.575: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-worker2,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-worker2,UID:67dfdf76-d64a-45cb-a2a9-755b73c85644,ResourceVersion:17319525,Generation:0,CreationTimestamp:2020-03-15 18:24:41 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-worker2,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.244.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{16 0} {} 16 DecimalSI},ephemeral-storage: {{2358466523136 0} {} 2303189964Ki BinarySI},hugepages-1Gi: {{0 0} {} 0 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{134922129408 0} {} 131759892Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{MemoryPressure False 2020-06-19 13:33:59 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-06-19 13:33:59 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-06-19 13:33:59 +0000 UTC 2020-03-15 18:24:41 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-06-19 13:33:59 +0000 UTC 2020-03-15 18:24:52 +0000 UTC KubeletReady kubelet is posting ready status}],Addresses:[{InternalIP 172.17.0.5} {Hostname 
iruya-worker2}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:5fda03f0d02548b7a74f8a4b6cc8795b,SystemUUID:d8b7a3a5-76b4-4c0b-85d7-cdb97f2c8b1a,BootID:ca2aa731-f890-4956-92a1-ff8c7560d571,KernelVersion:4.15.0-88-generic,OSImage:Ubuntu 19.10,ContainerRuntimeVersion:containerd://1.3.2,KubeletVersion:v1.15.7,KubeProxyVersion:v1.15.7,OperatingSystem:linux,Architecture:amd64,},Images:[{[k8s.gcr.io/etcd:3.3.10] 258352566} {[k8s.gcr.io/kube-apiserver:v1.15.7] 249088818} {[k8s.gcr.io/kube-controller-manager:v1.15.7] 199886660} {[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 142444388} {[docker.io/kindest/kindnetd:0.5.4] 113207016} {[k8s.gcr.io/kube-proxy:v1.15.7] 97350830} {[k8s.gcr.io/kube-scheduler:v1.15.7] 96554801} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 85425365} {[k8s.gcr.io/debian-base:v2.0.0] 53884301} {[k8s.gcr.io/coredns:1.3.1] 40532446} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 36655159} {[docker.io/rancher/local-path-provisioner:v0.0.11] 36513375} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 16222606} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 7398578} {[docker.io/library/nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 docker.io/library/nginx:1.15-alpine] 6999654} {[docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker.io/library/nginx:1.14-alpine] 6978806} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 4331310} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 3854313} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 2943605} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 2785431} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 2509546} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 2258365} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 1804628} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 1799936} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e 
gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 1791163} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 1772917} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 1039914} {[k8s.gcr.io/pause:3.1] 746479} {[docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 docker.io/library/busybox:1.29] 732685} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 599341} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 539309}],VolumesInUse:[],VolumesAttached:[],Config:nil,},} Jun 19 13:34:15.576: INFO: Logging kubelet events for node iruya-worker2 Jun 19 13:34:15.579: INFO: Logging pods the kubelet thinks is on node iruya-worker2 Jun 19 13:34:15.586: INFO: coredns-5d4dd4b4db-gm7vr started at 2020-03-15 18:24:52 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.586: INFO: Container coredns ready: true, restart count 0 Jun 19 13:34:15.586: INFO: coredns-5d4dd4b4db-6jcgz started at 2020-03-15 18:24:54 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.586: INFO: Container coredns ready: true, restart count 0 Jun 19 13:34:15.586: INFO: kube-proxy-vwbcj started at 2020-03-15 18:24:42 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.586: INFO: Container kube-proxy ready: true, restart count 0 Jun 19 13:34:15.586: INFO: kindnet-mgd8b started at 2020-03-15 18:24:43 +0000 UTC (0+1 container statuses recorded) Jun 19 13:34:15.586: INFO: Container kindnet-cni ready: true, restart count 2 W0619 13:34:15.589827 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 19 13:34:15.652: INFO: Latency metrics for node iruya-worker2 Jun 19 13:34:15.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5157" for this suite. Jun 19 13:34:21.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:34:21.770: INFO: namespace statefulset-5157 deletion completed in 6.11167073s • Failure [330.555 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] [It] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Timed out after 300.000s. 
Expected <*errors.errorString | 0xc0013425e0>: { s: "Pod ss-0 is not in running phase: Pending", } to be nil /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:789 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:34:21.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-231ea8c6-6d44-4aab-95ef-41416ae55cf2 Jun 19 13:34:21.837: INFO: Pod name my-hostname-basic-231ea8c6-6d44-4aab-95ef-41416ae55cf2: Found 0 pods out of 1 Jun 19 13:34:26.842: INFO: Pod name my-hostname-basic-231ea8c6-6d44-4aab-95ef-41416ae55cf2: Found 1 pods out of 1 Jun 19 13:34:26.842: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-231ea8c6-6d44-4aab-95ef-41416ae55cf2" are running Jun 19 13:34:26.845: INFO: Pod "my-hostname-basic-231ea8c6-6d44-4aab-95ef-41416ae55cf2-7pzjz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-19 13:34:21 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-19 13:34:25 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-19 13:34:25 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-19 13:34:21 +0000 UTC Reason: Message:}]) Jun 19 13:34:26.845: INFO: Trying to dial the pod Jun 19 13:34:31.863: INFO: Controller my-hostname-basic-231ea8c6-6d44-4aab-95ef-41416ae55cf2: Got expected result from replica 1 [my-hostname-basic-231ea8c6-6d44-4aab-95ef-41416ae55cf2-7pzjz]: "my-hostname-basic-231ea8c6-6d44-4aab-95ef-41416ae55cf2-7pzjz", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:34:31.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5936" for this suite. 
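------------------------------
The `Should recreate evicted statefulset` failure above is a timeout, not a crash: the collected events show ss-0 repeatedly rejected with PodFitsHostPorts while test-pod held host port 21017, and although the kubelet reported the recreated container Started at 13:29:04, the API still reported the pod Pending with an empty IP when the 300s wait expired (see the `kubectl describe` output: Status: Pending, no IP). The sketch below shows the general shape of such a wait-for-Running loop; it is illustrative only, uses the pod and namespace names from this log, and assumes a client-go roughly contemporary with the v1.15 cluster here (newer client-go adds a context.Context argument to Get).

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Poll until the pod reports Running; give up after five minutes,
	// the same budget the failed spec above ran out of.
	err = wait.PollImmediate(10*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods("statefulset-5157").Get("ss-0", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		return pod.Status.Phase == corev1.PodRunning, nil
	})
	if err != nil {
		fmt.Println("pod ss-0 never reached Running:", err)
	}
}
------------------------------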
Jun 19 13:34:37.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:34:37.963: INFO: namespace replication-controller-5936 deletion completed in 6.097408784s • [SLOW TEST:16.193 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:34:37.963: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-266ce28e-65f8-4410-b31d-9da5cbd5b4b0 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:34:38.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5749" for this suite. Jun 19 13:34:44.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:34:44.162: INFO: namespace configmap-5749 deletion completed in 6.136466134s • [SLOW TEST:6.199 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:34:44.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0619 13:35:14.343286 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
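------------------------------
The garbage-collector spec in progress above ("delete the deployment", then "wait for 30 seconds to see if the garbage collector mistakenly deletes the rs") exercises deleteOptions.PropagationPolicy=Orphan: only the Deployment is removed, and its ReplicaSet must survive with its ownerReference cleared rather than being collected. A minimal sketch of such an orphaning delete, assuming a v1.15-era client-go (newer versions take a context and a value DeleteOptions) and a hypothetical deployment name:

package gcexample

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// OrphanDeployment deletes only the Deployment object; with Orphan
// propagation the GC strips the ownerReference from the dependent
// ReplicaSet instead of cascading the delete to it.
func OrphanDeployment(cs kubernetes.Interface, ns, name string) error {
	policy := metav1.DeletePropagationOrphan
	return cs.AppsV1().Deployments(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &policy,
	})
}
------------------------------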
Jun 19 13:35:14.343: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:35:14.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6274" for this suite. Jun 19 13:35:20.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:35:20.548: INFO: namespace gc-6274 deletion completed in 6.20166046s • [SLOW TEST:36.385 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:35:20.548: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-f4d5aaa2-d524-4e90-af22-7ba656c22463 STEP: Creating secret with name secret-projected-all-test-volume-0cf7b940-c873-4454-b71c-5665205838da STEP: Creating a pod to test Check all projections for projected volume plugin Jun 19 13:35:20.648: INFO: Waiting up to 5m0s for pod "projected-volume-b2cf6619-5a4b-4fde-b30b-1ff8bcbe711c" in namespace "projected-1556" to be "success or failure" Jun 19 13:35:20.666: INFO: Pod "projected-volume-b2cf6619-5a4b-4fde-b30b-1ff8bcbe711c": Phase="Pending", Reason="", readiness=false. Elapsed: 17.83904ms Jun 19 13:35:22.670: INFO: Pod "projected-volume-b2cf6619-5a4b-4fde-b30b-1ff8bcbe711c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02196406s Jun 19 13:35:24.675: INFO: Pod "projected-volume-b2cf6619-5a4b-4fde-b30b-1ff8bcbe711c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027038112s STEP: Saw pod success Jun 19 13:35:24.675: INFO: Pod "projected-volume-b2cf6619-5a4b-4fde-b30b-1ff8bcbe711c" satisfied condition "success or failure" Jun 19 13:35:24.678: INFO: Trying to get logs from node iruya-worker pod projected-volume-b2cf6619-5a4b-4fde-b30b-1ff8bcbe711c container projected-all-volume-test: STEP: delete the pod Jun 19 13:35:24.699: INFO: Waiting for pod projected-volume-b2cf6619-5a4b-4fde-b30b-1ff8bcbe711c to disappear Jun 19 13:35:24.703: INFO: Pod projected-volume-b2cf6619-5a4b-4fde-b30b-1ff8bcbe711c no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:35:24.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1556" for this suite. Jun 19 13:35:30.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:35:30.803: INFO: namespace projected-1556 deletion completed in 6.097099139s • [SLOW TEST:10.255 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:35:30.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-01b80c60-267b-4c60-9bc8-da9f10bc1623 STEP: Creating a pod to test consume configMaps Jun 19 13:35:30.884: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95954c40-e8d2-42c7-8531-ebc127cdcd7b" in namespace "projected-8724" to be "success or failure" Jun 19 13:35:30.895: INFO: Pod "pod-projected-configmaps-95954c40-e8d2-42c7-8531-ebc127cdcd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 11.575568ms Jun 19 13:35:32.900: INFO: Pod "pod-projected-configmaps-95954c40-e8d2-42c7-8531-ebc127cdcd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01573692s Jun 19 13:35:34.905: INFO: Pod "pod-projected-configmaps-95954c40-e8d2-42c7-8531-ebc127cdcd7b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.020728311s STEP: Saw pod success Jun 19 13:35:34.905: INFO: Pod "pod-projected-configmaps-95954c40-e8d2-42c7-8531-ebc127cdcd7b" satisfied condition "success or failure" Jun 19 13:35:34.908: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-95954c40-e8d2-42c7-8531-ebc127cdcd7b container projected-configmap-volume-test: STEP: delete the pod Jun 19 13:35:34.927: INFO: Waiting for pod pod-projected-configmaps-95954c40-e8d2-42c7-8531-ebc127cdcd7b to disappear Jun 19 13:35:34.931: INFO: Pod pod-projected-configmaps-95954c40-e8d2-42c7-8531-ebc127cdcd7b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:35:34.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8724" for this suite. Jun 19 13:35:40.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:35:41.038: INFO: namespace projected-8724 deletion completed in 6.103571878s • [SLOW TEST:10.234 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:35:41.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 13:35:41.134: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a7eab70-b575-47a9-96a9-a6af39af43f1" in namespace "projected-5421" to be "success or failure" Jun 19 13:35:41.141: INFO: Pod "downwardapi-volume-8a7eab70-b575-47a9-96a9-a6af39af43f1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.565181ms Jun 19 13:35:43.145: INFO: Pod "downwardapi-volume-8a7eab70-b575-47a9-96a9-a6af39af43f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010731174s Jun 19 13:35:45.150: INFO: Pod "downwardapi-volume-8a7eab70-b575-47a9-96a9-a6af39af43f1": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015929693s STEP: Saw pod success Jun 19 13:35:45.150: INFO: Pod "downwardapi-volume-8a7eab70-b575-47a9-96a9-a6af39af43f1" satisfied condition "success or failure" Jun 19 13:35:45.153: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8a7eab70-b575-47a9-96a9-a6af39af43f1 container client-container: STEP: delete the pod Jun 19 13:35:45.170: INFO: Waiting for pod downwardapi-volume-8a7eab70-b575-47a9-96a9-a6af39af43f1 to disappear Jun 19 13:35:45.174: INFO: Pod downwardapi-volume-8a7eab70-b575-47a9-96a9-a6af39af43f1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:35:45.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5421" for this suite. Jun 19 13:35:51.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:35:51.267: INFO: namespace projected-5421 deletion completed in 6.089368165s • [SLOW TEST:10.228 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:35:51.267: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:35:51.352: INFO: Create a RollingUpdate DaemonSet Jun 19 13:35:51.355: INFO: Check that daemon pods launch on every node of the cluster Jun 19 13:35:51.396: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:35:51.399: INFO: Number of nodes with available pods: 0 Jun 19 13:35:51.399: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:35:52.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:35:52.407: INFO: Number of nodes with available pods: 0 Jun 19 13:35:52.407: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:35:53.403: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:35:53.407: INFO: Number of nodes with available pods: 0 Jun 19 13:35:53.407: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:35:54.426: 
INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:35:54.637: INFO: Number of nodes with available pods: 0 Jun 19 13:35:54.637: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:35:55.403: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:35:55.407: INFO: Number of nodes with available pods: 1 Jun 19 13:35:55.407: INFO: Node iruya-worker2 is running more than one daemon pod Jun 19 13:35:56.404: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:35:56.408: INFO: Number of nodes with available pods: 2 Jun 19 13:35:56.408: INFO: Number of running nodes: 2, number of available pods: 2 Jun 19 13:35:56.408: INFO: Update the DaemonSet to trigger a rollout Jun 19 13:35:56.415: INFO: Updating DaemonSet daemon-set Jun 19 13:36:00.460: INFO: Roll back the DaemonSet before rollout is complete Jun 19 13:36:00.466: INFO: Updating DaemonSet daemon-set Jun 19 13:36:00.466: INFO: Make sure DaemonSet rollback is complete Jun 19 13:36:00.489: INFO: Wrong image for pod: daemon-set-2kbfw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 19 13:36:00.489: INFO: Pod daemon-set-2kbfw is not available Jun 19 13:36:00.495: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:36:01.499: INFO: Wrong image for pod: daemon-set-2kbfw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Jun 19 13:36:01.499: INFO: Pod daemon-set-2kbfw is not available Jun 19 13:36:01.502: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:36:02.505: INFO: Wrong image for pod: daemon-set-2kbfw. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
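------------------------------
The rollout being watched above was wedged deliberately: the spec updates the DaemonSet template to the unpullable image foo:non-existent, waits, then rolls back to docker.io/library/nginx:1.14-alpine before the rollout completes. The repeated "Wrong image for pod" lines track the one pod (daemon-set-2kbfw) that picked up the broken image and must be replaced; pods that never updated should not restart. Roughly, the trigger-and-revert looks like this (a sketch with hypothetical wiring; the real spec goes through framework helpers, and a production client would wrap each Update in a conflict retry):

package dsexample

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// WedgeAndRollBack pushes a bad image into a DaemonSet's pod template,
// then restores the previous image before the rollout can finish.
func WedgeAndRollBack(cs kubernetes.Interface, ns, name string) error {
	ds, err := cs.AppsV1().DaemonSets(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	good := ds.Spec.Template.Spec.Containers[0].Image // nginx:1.14-alpine in this log
	ds.Spec.Template.Spec.Containers[0].Image = "foo:non-existent"
	if ds, err = cs.AppsV1().DaemonSets(ns).Update(ds); err != nil {
		return err
	}
	// ... wait here until at least one pod is stuck on the bad image ...
	ds.Spec.Template.Spec.Containers[0].Image = good // roll back mid-rollout
	_, err = cs.AppsV1().DaemonSets(ns).Update(ds)
	return err
}
------------------------------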
Jun 19 13:36:02.505: INFO: Pod daemon-set-2kbfw is not available Jun 19 13:36:02.602: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:36:03.500: INFO: Pod daemon-set-dw25t is not available Jun 19 13:36:03.509: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1935, will wait for the garbage collector to delete the pods Jun 19 13:36:03.575: INFO: Deleting DaemonSet.extensions daemon-set took: 8.248344ms Jun 19 13:36:03.876: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.275332ms Jun 19 13:36:11.910: INFO: Number of nodes with available pods: 0 Jun 19 13:36:11.910: INFO: Number of running nodes: 0, number of available pods: 0 Jun 19 13:36:11.914: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1935/daemonsets","resourceVersion":"17320062"},"items":null} Jun 19 13:36:11.916: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1935/pods","resourceVersion":"17320062"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:36:11.927: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-1935" for this suite. Jun 19 13:36:17.948: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:36:18.042: INFO: namespace daemonsets-1935 deletion completed in 6.110475563s • [SLOW TEST:26.774 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:36:18.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-4892.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4892.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 19 13:36:24.182: INFO: DNS probes using dns-4892/dns-test-cbd71a58-3ca0-4a20-a548-1d6e46508452 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:36:24.223: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4892" for this suite. Jun 19 13:36:30.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:36:30.411: INFO: namespace dns-4892 deletion completed in 6.158155348s • [SLOW TEST:12.369 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:36:30.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-9503, will wait for the garbage collector to delete the pods Jun 19 13:36:36.580: INFO: Deleting Job.batch foo took: 7.16548ms Jun 19 13:36:36.880: INFO: Terminating Job.batch foo pods took: 300.398098ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:37:12.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-9503" for this suite. 
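------------------------------
"deleting Job.batch foo in namespace job-9503, will wait for the garbage collector to delete the pods" above means the client removes only the Job object and relies on ownerReferences for the pods; the roughly 35-second gap before "Ensuring job was deleted" succeeds is the GC doing that work. One way to request the same end state directly is a foreground delete, sketched below as an assumption for illustration (the framework's own deletion helper may use different options):

package jobexample

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// DeleteJobAndPods removes a Job and lets the GC finish the cleanup:
// with Foreground propagation the Job object persists, blocked by the
// foregroundDeletion finalizer, until all dependent pods are gone.
func DeleteJobAndPods(cs kubernetes.Interface, ns, name string) error {
	fg := metav1.DeletePropagationForeground
	return cs.BatchV1().Jobs(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &fg,
	})
}
------------------------------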
Jun 19 13:37:18.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:37:18.281: INFO: namespace job-9503 deletion completed in 6.093361723s • [SLOW TEST:47.870 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:37:18.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-d9ab8d46-8f9d-4ccf-a14f-9391fae48abe STEP: Creating a pod to test consume secrets Jun 19 13:37:18.388: INFO: Waiting up to 5m0s for pod "pod-secrets-13ff72b8-5edb-494a-b43d-132a37fc0a46" in namespace "secrets-603" to be "success or failure" Jun 19 13:37:18.422: INFO: Pod "pod-secrets-13ff72b8-5edb-494a-b43d-132a37fc0a46": Phase="Pending", Reason="", readiness=false. Elapsed: 34.476889ms Jun 19 13:37:20.559: INFO: Pod "pod-secrets-13ff72b8-5edb-494a-b43d-132a37fc0a46": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170854016s Jun 19 13:37:22.562: INFO: Pod "pod-secrets-13ff72b8-5edb-494a-b43d-132a37fc0a46": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.174466622s STEP: Saw pod success Jun 19 13:37:22.562: INFO: Pod "pod-secrets-13ff72b8-5edb-494a-b43d-132a37fc0a46" satisfied condition "success or failure" Jun 19 13:37:22.564: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-13ff72b8-5edb-494a-b43d-132a37fc0a46 container secret-volume-test: STEP: delete the pod Jun 19 13:37:22.582: INFO: Waiting for pod pod-secrets-13ff72b8-5edb-494a-b43d-132a37fc0a46 to disappear Jun 19 13:37:22.610: INFO: Pod pod-secrets-13ff72b8-5edb-494a-b43d-132a37fc0a46 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:37:22.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-603" for this suite. 
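------------------------------
The Secrets spec just above mounts secret-test-map-d9ab8d46-8f9d-4ccf-a14f-9391fae48abe through a volume that both remaps the secret key to a new file path and sets an explicit per-item file mode, then has the container read the file back. A pod wiring that up looks roughly like this (the secret name is taken from the log; the key, mapped path, and mode value are assumptions for illustration):

package secretexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodWithMappedSecret builds a pod that mounts one secret key at a
// mapped path with an explicit file mode, in the spirit of the spec above.
func PodWithMappedSecret() *corev1.Pod {
	mode := int32(0400) // assumed "Item Mode"; the test asserts the mounted file carries it
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					Secret: &corev1.SecretVolumeSource{
						SecretName: "secret-test-map-d9ab8d46-8f9d-4ccf-a14f-9391fae48abe",
						Items: []corev1.KeyToPath{{
							Key:  "data-1",          // hypothetical key in the secret
							Path: "new-path-data-1", // remapped file name inside the mount
							Mode: &mode,
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "secret-volume-test",
				Image: "docker.io/library/busybox:1.29",
				Args:  []string{"sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "secret-volume",
					MountPath: "/etc/secret-volume",
				}},
			}},
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}
------------------------------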
Jun 19 13:37:28.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:37:28.710: INFO: namespace secrets-603 deletion completed in 6.096023816s • [SLOW TEST:10.428 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:37:28.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Jun 19 13:37:29.291: INFO: created pod pod-service-account-defaultsa Jun 19 13:37:29.291: INFO: pod pod-service-account-defaultsa service account token volume mount: true Jun 19 13:37:29.325: INFO: created pod pod-service-account-mountsa Jun 19 13:37:29.325: INFO: pod pod-service-account-mountsa service account token volume mount: true Jun 19 13:37:29.351: INFO: created pod pod-service-account-nomountsa Jun 19 13:37:29.351: INFO: pod pod-service-account-nomountsa service account token volume mount: false Jun 19 13:37:29.365: INFO: created pod pod-service-account-defaultsa-mountspec Jun 19 13:37:29.365: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Jun 19 13:37:29.424: INFO: created pod pod-service-account-mountsa-mountspec Jun 19 13:37:29.424: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Jun 19 13:37:29.457: INFO: created pod pod-service-account-nomountsa-mountspec Jun 19 13:37:29.457: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Jun 19 13:37:29.483: INFO: created pod pod-service-account-defaultsa-nomountspec Jun 19 13:37:29.483: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Jun 19 13:37:29.523: INFO: created pod pod-service-account-mountsa-nomountspec Jun 19 13:37:29.523: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Jun 19 13:37:29.612: INFO: created pod pod-service-account-nomountsa-nomountspec Jun 19 13:37:29.612: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:37:29.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8523" for this suite. 
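------------------------------
Each "service account token volume mount: true/false" line above records whether a default token volume was injected into the pod. The matrix being tested is that pod.spec.automountServiceAccountToken, when set, overrides the ServiceAccount-level automountServiceAccountToken, and the SA value applies only when the pod leaves the field nil (hence nomountsa-mountspec mounts, while mountsa-nomountspec does not). The opt-out case looks roughly like this (sketch; the SA and pod names are placeholders patterned on the log):

package saexample

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodWithoutTokenAutomount opts the pod itself out of token injection,
// regardless of what its service account requests.
func PodWithoutTokenAutomount() *corev1.Pod {
	no := false
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           "mount-sa", // placeholder SA; its own setting is overridden
			AutomountServiceAccountToken: &no,
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
}
------------------------------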
Jun 19 13:37:55.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:37:55.875: INFO: namespace svcaccounts-8523 deletion completed in 26.227269773s • [SLOW TEST:27.164 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:37:55.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Jun 19 13:37:55.981: INFO: namespace kubectl-2965 Jun 19 13:37:55.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2965' Jun 19 13:37:56.243: INFO: stderr: "" Jun 19 13:37:56.243: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Jun 19 13:37:57.248: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:37:57.248: INFO: Found 0 / 1 Jun 19 13:37:58.248: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:37:58.248: INFO: Found 0 / 1 Jun 19 13:37:59.248: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:37:59.248: INFO: Found 0 / 1 Jun 19 13:38:00.247: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:38:00.247: INFO: Found 1 / 1 Jun 19 13:38:00.247: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 19 13:38:00.250: INFO: Selector matched 1 pods for map[app:redis] Jun 19 13:38:00.250: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jun 19 13:38:00.250: INFO: wait on redis-master startup in kubectl-2965 Jun 19 13:38:00.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-wxrxn redis-master --namespace=kubectl-2965' Jun 19 13:38:00.357: INFO: stderr: "" Jun 19 13:38:00.357: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 Jun 13:37:59.338 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Jun 13:37:59.340 # Server started, Redis version 3.2.12\n1:M 19 Jun 13:37:59.340 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Jun 13:37:59.340 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Jun 19 13:38:00.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-2965' Jun 19 13:38:00.511: INFO: stderr: "" Jun 19 13:38:00.511: INFO: stdout: "service/rm2 exposed\n" Jun 19 13:38:00.518: INFO: Service rm2 in namespace kubectl-2965 found. STEP: exposing service Jun 19 13:38:02.526: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-2965' Jun 19 13:38:02.664: INFO: stderr: "" Jun 19 13:38:02.664: INFO: stdout: "service/rm3 exposed\n" Jun 19 13:38:02.672: INFO: Service rm3 in namespace kubectl-2965 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:38:04.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2965" for this suite. 
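Note on the spec above: stripped of the harness flags, the two expose steps are just the following; rm2 fronts the RC's pods directly, and rm3 re-exposes rm2 by copying its selector, so both services land on the same redis pod:

  kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379
  kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379
  # Both services should list the same pod IP on 6379:
  kubectl get endpoints rm2 rm3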
Jun 19 13:38:36.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:38:36.780: INFO: namespace kubectl-2965 deletion completed in 32.094953622s • [SLOW TEST:40.905 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:38:36.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Jun 19 13:38:36.843: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix955089796/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:38:36.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9171" for this suite. 
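Note on the spec above: the doubled "kubectl kubectl" in the "Asynchronously running" line appears to be the harness logging the binary path followed by the full argv (whose first element is "kubectl" again), not a real double invocation. The test itself just drives the API over a Unix socket instead of a TCP port; a hand-run equivalent, with an illustrative socket path:

  kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
  curl --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
  # expect the APIVersions JSON that the "retrieving proxy /api/ output" step checks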
Jun 19 13:38:42.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:38:43.041: INFO: namespace kubectl-9171 deletion completed in 6.110552038s • [SLOW TEST:6.262 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:38:43.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 19 13:38:43.278: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:43.302: INFO: Number of nodes with available pods: 0 Jun 19 13:38:43.302: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:38:44.308: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:44.312: INFO: Number of nodes with available pods: 0 Jun 19 13:38:44.312: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:38:45.307: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:45.310: INFO: Number of nodes with available pods: 0 Jun 19 13:38:45.310: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:38:46.351: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:46.354: INFO: Number of nodes with available pods: 0 Jun 19 13:38:46.354: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:38:47.307: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:47.311: INFO: Number of nodes with available pods: 2 Jun 19 13:38:47.311: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Jun 19 13:38:47.339: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:47.392: INFO: Number of nodes with available pods: 1 Jun 19 13:38:47.392: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:38:48.435: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:48.438: INFO: Number of nodes with available pods: 1 Jun 19 13:38:48.438: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:38:49.399: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:49.402: INFO: Number of nodes with available pods: 1 Jun 19 13:38:49.402: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:38:50.398: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:50.402: INFO: Number of nodes with available pods: 1 Jun 19 13:38:50.402: INFO: Node iruya-worker is running more than one daemon pod Jun 19 13:38:51.397: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 13:38:51.401: INFO: Number of nodes with available pods: 2 Jun 19 13:38:51.401: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5247, will wait for the garbage collector to delete the pods Jun 19 13:38:51.467: INFO: Deleting DaemonSet.extensions daemon-set took: 7.005207ms Jun 19 13:38:51.767: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.26245ms Jun 19 13:39:02.271: INFO: Number of nodes with available pods: 0 Jun 19 13:39:02.271: INFO: Number of running nodes: 0, number of available pods: 0 Jun 19 13:39:02.274: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5247/daemonsets","resourceVersion":"17320746"},"items":null} Jun 19 13:39:02.276: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5247/pods","resourceVersion":"17320746"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:39:02.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5247" for this suite. 
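Note on the spec above: the test flips one daemon pod's status.phase to Failed through the API (hence the brief dip to 1 available pod) and waits for the controller to replace it. Plain kubectl cannot set a pod's phase, but deleting a pod provokes the same controller reaction; a sketch under that substitution, with an illustrative manifest:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: daemon-set
  spec:
    selector:
      matchLabels:
        app: daemon-set
    template:
      metadata:
        labels:
          app: daemon-set
      spec:
        containers:
        - name: app
          image: k8s.gcr.io/pause:3.1
  EOF
  # Remove the daemon pod on one node; the controller recreates it there.
  kubectl delete pod -l app=daemon-set --field-selector spec.nodeName=iruya-worker
  kubectl get pods -l app=daemon-set -o wide -w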
Jun 19 13:39:08.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:39:08.386: INFO: namespace daemonsets-5247 deletion completed in 6.099903937s • [SLOW TEST:25.345 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:39:08.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jun 19 13:39:08.502: INFO: Waiting up to 5m0s for pod "pod-b568cf5a-a1a9-4de6-9d01-35869d6e8aa4" in namespace "emptydir-488" to be "success or failure" Jun 19 13:39:08.512: INFO: Pod "pod-b568cf5a-a1a9-4de6-9d01-35869d6e8aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 10.262327ms Jun 19 13:39:10.602: INFO: Pod "pod-b568cf5a-a1a9-4de6-9d01-35869d6e8aa4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100015394s Jun 19 13:39:12.607: INFO: Pod "pod-b568cf5a-a1a9-4de6-9d01-35869d6e8aa4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.104629819s STEP: Saw pod success Jun 19 13:39:12.607: INFO: Pod "pod-b568cf5a-a1a9-4de6-9d01-35869d6e8aa4" satisfied condition "success or failure" Jun 19 13:39:12.610: INFO: Trying to get logs from node iruya-worker pod pod-b568cf5a-a1a9-4de6-9d01-35869d6e8aa4 container test-container: STEP: delete the pod Jun 19 13:39:12.639: INFO: Waiting for pod pod-b568cf5a-a1a9-4de6-9d01-35869d6e8aa4 to disappear Jun 19 13:39:12.650: INFO: Pod pod-b568cf5a-a1a9-4de6-9d01-35869d6e8aa4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:39:12.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-488" for this suite. 
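Note on the spec above: in the (root,0644,tmpfs) variant the pod writes a file with mode 0644 into a memory-backed emptyDir as root, and the harness asserts on the mount type plus the file's permissions and content via the container's logs. A rough busybox stand-in (image and commands illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-tmpfs-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox:1.29
      command: ["sh", "-c", "echo hi > /data/f && chmod 0644 /data/f && mount | grep ' /data ' && stat -c '%a' /data/f"]
      volumeMounts:
      - name: scratch
        mountPath: /data
    volumes:
    - name: scratch
      emptyDir:
        medium: Memory  # tmpfs rather than node disk
  EOF
  kubectl logs emptydir-tmpfs-demo -f  # expect a tmpfs mount line and "644"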
Jun 19 13:39:18.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:39:18.749: INFO: namespace emptydir-488 deletion completed in 6.095974582s • [SLOW TEST:10.362 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:39:18.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-9239 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9239 to expose endpoints map[] Jun 19 13:39:18.860: INFO: Get endpoints failed (12.831145ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Jun 19 13:39:19.864: INFO: successfully validated that service multi-endpoint-test in namespace services-9239 exposes endpoints map[] (1.017048678s elapsed) STEP: Creating pod pod1 in namespace services-9239 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9239 to expose endpoints map[pod1:[100]] Jun 19 13:39:23.928: INFO: successfully validated that service multi-endpoint-test in namespace services-9239 exposes endpoints map[pod1:[100]] (4.05760472s elapsed) STEP: Creating pod pod2 in namespace services-9239 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9239 to expose endpoints map[pod1:[100] pod2:[101]] Jun 19 13:39:27.989: INFO: successfully validated that service multi-endpoint-test in namespace services-9239 exposes endpoints map[pod1:[100] pod2:[101]] (4.057013207s elapsed) STEP: Deleting pod pod1 in namespace services-9239 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9239 to expose endpoints map[pod2:[101]] Jun 19 13:39:29.030: INFO: successfully validated that service multi-endpoint-test in namespace services-9239 exposes endpoints map[pod2:[101]] (1.035459088s elapsed) STEP: Deleting pod pod2 in namespace services-9239 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9239 to expose endpoints map[] Jun 19 13:39:30.046: INFO: successfully validated that service multi-endpoint-test in namespace services-9239 exposes endpoints map[] (1.010741985s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:39:30.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9239" for this suite. 
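Note on the spec above: the endpoint maps (pod1:[100], pod2:[101]) record which pods back each port of one multi-port Service; entries appear and disappear as the matching pods are created and deleted. Illustrative equivalent objects (pods carrying the selector labels and container ports 100/101 would populate the endpoints):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      app: multiport-demo
    ports:
    - name: portname1
      port: 80
      targetPort: 100
    - name: portname2
      port: 81
      targetPort: 101
  EOF
  # Endpoints stay empty (map[]) until pods matching the selector exist:
  kubectl get endpoints multi-endpoint-test -w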
Jun 19 13:39:52.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:39:52.219: INFO: namespace services-9239 deletion completed in 22.086730332s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:33.470 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:39:52.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:40:14.354: INFO: Container started at 2020-06-19 13:39:55 +0000 UTC, pod became ready at 2020-06-19 13:40:13 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:40:14.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8263" for this suite. 
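Note on the spec above: the assertion is purely about timing. The container started at 13:39:55 but the pod only turned Ready roughly 18 s later, consistent with the probe's initial delay, and the restart count must stay 0 throughout. A comparable probe spec (numbers and image illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-demo
  spec:
    containers:
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "touch /tmp/ready && sleep 600"]
      readinessProbe:
        exec:
          command: ["cat", "/tmp/ready"]
        initialDelaySeconds: 15  # the pod must not report Ready before this
        periodSeconds: 5
  EOF
  kubectl get pod readiness-demo -w  # READY flips to 1/1 only after the delay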
Jun 19 13:40:36.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:40:36.453: INFO: namespace container-probe-8263 deletion completed in 22.094917446s • [SLOW TEST:44.233 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:40:36.453: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-d8386aa0-718a-4185-9ad4-d4b12447e0ce STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-d8386aa0-718a-4185-9ad4-d4b12447e0ce STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:40:44.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3580" for this suite. 
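Note on the spec above: "waiting to observe update in volume" works because the kubelet resyncs projected volumes in place; editing the ConfigMap changes the file a running container sees, with no restart. A sketch with illustrative names and values:

  kubectl create configmap projected-demo --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: projected-update-demo
  spec:
    containers:
    - name: app
      image: busybox:1.29
      command: ["sh", "-c", "while true; do cat /etc/cfg/data-1; echo; sleep 5; done"]
      volumeMounts:
      - name: cfg
        mountPath: /etc/cfg
    volumes:
    - name: cfg
      projected:
        sources:
        - configMap:
            name: projected-demo
  EOF
  # Change the value; within the kubelet sync period the pod's output follows:
  kubectl create configmap projected-demo --from-literal=data-1=value-2 \
    --dry-run -o yaml | kubectl apply -f -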
Jun 19 13:41:06.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:41:06.766: INFO: namespace projected-3580 deletion completed in 22.10247488s • [SLOW TEST:30.313 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:41:06.767: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0619 13:41:46.901929 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 19 13:41:46.901: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:41:46.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1616" for this suite. 
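Note on the spec above: "delete options say so" means the RC is deleted with propagationPolicy=Orphan, after which the garbage collector must leave the pods untouched for the 30 s watch window (the metrics-grabber warning is incidental: no master node is registered in this kind cluster). The kubectl spelling of the same request, with an illustrative RC name; clients newer than this v1.15 run write the flag as --cascade=orphan:

  kubectl delete rc simpletest-rc --cascade=false  # orphan the dependents
  kubectl get pods  # the RC's pods survive their owner's deletion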
Jun 19 13:41:54.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:41:54.989: INFO: namespace gc-1616 deletion completed in 8.082968128s • [SLOW TEST:48.222 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:41:54.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9762 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 19 13:41:55.493: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 19 13:42:21.856: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.76:8080/dial?request=hostName&protocol=http&host=10.244.1.75&port=8080&tries=1'] Namespace:pod-network-test-9762 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:42:21.856: INFO: >>> kubeConfig: /root/.kube/config I0619 13:42:21.898600 6 log.go:172] (0xc000ce8630) (0xc001e93e00) Create stream I0619 13:42:21.898631 6 log.go:172] (0xc000ce8630) (0xc001e93e00) Stream added, broadcasting: 1 I0619 13:42:21.900740 6 log.go:172] (0xc000ce8630) Reply frame received for 1 I0619 13:42:21.900784 6 log.go:172] (0xc000ce8630) (0xc0003055e0) Create stream I0619 13:42:21.900800 6 log.go:172] (0xc000ce8630) (0xc0003055e0) Stream added, broadcasting: 3 I0619 13:42:21.902246 6 log.go:172] (0xc000ce8630) Reply frame received for 3 I0619 13:42:21.902286 6 log.go:172] (0xc000ce8630) (0xc00297c140) Create stream I0619 13:42:21.902300 6 log.go:172] (0xc000ce8630) (0xc00297c140) Stream added, broadcasting: 5 I0619 13:42:21.903261 6 log.go:172] (0xc000ce8630) Reply frame received for 5 I0619 13:42:22.044877 6 log.go:172] (0xc000ce8630) Data frame received for 3 I0619 13:42:22.044916 6 log.go:172] (0xc0003055e0) (3) Data frame handling I0619 13:42:22.044941 6 log.go:172] (0xc0003055e0) (3) Data frame sent I0619 13:42:22.045516 6 log.go:172] (0xc000ce8630) Data frame received for 3 I0619 13:42:22.045544 6 log.go:172] (0xc0003055e0) (3) Data frame handling I0619 13:42:22.045678 6 log.go:172] (0xc000ce8630) Data frame received for 5 I0619 13:42:22.045697 6 log.go:172] (0xc00297c140) (5) Data frame handling I0619 13:42:22.047387 6 log.go:172] (0xc000ce8630) Data frame received for 1 I0619 13:42:22.047408 6 log.go:172] (0xc001e93e00) (1) Data frame handling I0619 
13:42:22.047432 6 log.go:172] (0xc001e93e00) (1) Data frame sent I0619 13:42:22.047516 6 log.go:172] (0xc000ce8630) (0xc001e93e00) Stream removed, broadcasting: 1 I0619 13:42:22.047561 6 log.go:172] (0xc000ce8630) Go away received I0619 13:42:22.047640 6 log.go:172] (0xc000ce8630) (0xc001e93e00) Stream removed, broadcasting: 1 I0619 13:42:22.047690 6 log.go:172] (0xc000ce8630) (0xc0003055e0) Stream removed, broadcasting: 3 I0619 13:42:22.047723 6 log.go:172] (0xc000ce8630) (0xc00297c140) Stream removed, broadcasting: 5 Jun 19 13:42:22.047: INFO: Waiting for endpoints: map[] Jun 19 13:42:22.051: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.76:8080/dial?request=hostName&protocol=http&host=10.244.2.97&port=8080&tries=1'] Namespace:pod-network-test-9762 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 13:42:22.051: INFO: >>> kubeConfig: /root/.kube/config I0619 13:42:22.078719 6 log.go:172] (0xc000cfe8f0) (0xc000305cc0) Create stream I0619 13:42:22.078746 6 log.go:172] (0xc000cfe8f0) (0xc000305cc0) Stream added, broadcasting: 1 I0619 13:42:22.080887 6 log.go:172] (0xc000cfe8f0) Reply frame received for 1 I0619 13:42:22.080928 6 log.go:172] (0xc000cfe8f0) (0xc000305e00) Create stream I0619 13:42:22.080943 6 log.go:172] (0xc000cfe8f0) (0xc000305e00) Stream added, broadcasting: 3 I0619 13:42:22.082219 6 log.go:172] (0xc000cfe8f0) Reply frame received for 3 I0619 13:42:22.082277 6 log.go:172] (0xc000cfe8f0) (0xc001e93ea0) Create stream I0619 13:42:22.082292 6 log.go:172] (0xc000cfe8f0) (0xc001e93ea0) Stream added, broadcasting: 5 I0619 13:42:22.083306 6 log.go:172] (0xc000cfe8f0) Reply frame received for 5 I0619 13:42:22.167964 6 log.go:172] (0xc000cfe8f0) Data frame received for 3 I0619 13:42:22.167986 6 log.go:172] (0xc000305e00) (3) Data frame handling I0619 13:42:22.168001 6 log.go:172] (0xc000305e00) (3) Data frame sent I0619 13:42:22.168261 6 log.go:172] (0xc000cfe8f0) Data frame received for 3 I0619 13:42:22.168283 6 log.go:172] (0xc000305e00) (3) Data frame handling I0619 13:42:22.168357 6 log.go:172] (0xc000cfe8f0) Data frame received for 5 I0619 13:42:22.168368 6 log.go:172] (0xc001e93ea0) (5) Data frame handling I0619 13:42:22.169913 6 log.go:172] (0xc000cfe8f0) Data frame received for 1 I0619 13:42:22.169941 6 log.go:172] (0xc000305cc0) (1) Data frame handling I0619 13:42:22.169957 6 log.go:172] (0xc000305cc0) (1) Data frame sent I0619 13:42:22.169968 6 log.go:172] (0xc000cfe8f0) (0xc000305cc0) Stream removed, broadcasting: 1 I0619 13:42:22.169982 6 log.go:172] (0xc000cfe8f0) Go away received I0619 13:42:22.170066 6 log.go:172] (0xc000cfe8f0) (0xc000305cc0) Stream removed, broadcasting: 1 I0619 13:42:22.170088 6 log.go:172] (0xc000cfe8f0) (0xc000305e00) Stream removed, broadcasting: 3 I0619 13:42:22.170100 6 log.go:172] (0xc000cfe8f0) (0xc001e93ea0) Stream removed, broadcasting: 5 Jun 19 13:42:22.170: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:42:22.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9762" for this suite. 
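Note on the spec above: minus the SPDY stream chatter, each connectivity check is a single exec'd curl; the host test pod asks the netserver container at 10.244.1.76 to dial a peer pod and report which hostname answered. The framework drives this through the API's exec subresource; the kubectl equivalent of the second check would be roughly:

  kubectl -n pod-network-test-9762 exec host-test-container-pod -c hostexec -- \
    curl -g -q -s 'http://10.244.1.76:8080/dial?request=hostName&protocol=http&host=10.244.2.97&port=8080&tries=1'
  # a JSON body naming the target pod's hostname means intra-pod HTTP works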
Jun 19 13:42:44.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:42:44.267: INFO: namespace pod-network-test-9762 deletion completed in 22.093381436s • [SLOW TEST:49.277 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:42:44.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:42:44.374: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Jun 19 13:42:44.479: INFO: Pod name sample-pod: Found 0 pods out of 1 Jun 19 13:42:49.484: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 19 13:42:49.484: INFO: Creating deployment "test-rolling-update-deployment" Jun 19 13:42:49.488: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Jun 19 13:42:49.509: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Jun 19 13:42:51.546: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Jun 19 13:42:51.549: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728170969, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728170969, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728170969, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728170969, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 13:42:53.553: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 19 13:42:53.564: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-5198,SelfLink:/apis/apps/v1/namespaces/deployment-5198/deployments/test-rolling-update-deployment,UID:f3779190-09a4-4ee4-9e44-606acc900d1b,ResourceVersion:17321646,Generation:1,CreationTimestamp:2020-06-19 13:42:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-19 13:42:49 +0000 UTC 2020-06-19 13:42:49 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-19 13:42:52 +0000 UTC 2020-06-19 13:42:49 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 19 13:42:53.568: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-5198,SelfLink:/apis/apps/v1/namespaces/deployment-5198/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:6d45d441-2805-4ca1-a396-4b646a476dd9,ResourceVersion:17321635,Generation:1,CreationTimestamp:2020-06-19 13:42:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f3779190-09a4-4ee4-9e44-606acc900d1b 0xc00247d4d7 0xc00247d4d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 19 13:42:53.568: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Jun 19 13:42:53.568: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-5198,SelfLink:/apis/apps/v1/namespaces/deployment-5198/replicasets/test-rolling-update-controller,UID:411dae66-30a0-4c07-ae2f-93ea7125f173,ResourceVersion:17321644,Generation:2,CreationTimestamp:2020-06-19 13:42:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 
2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment f3779190-09a4-4ee4-9e44-606acc900d1b 0xc00247d3f7 0xc00247d3f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 19 13:42:53.571: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-qfl9c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-qfl9c,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-5198,SelfLink:/api/v1/namespaces/deployment-5198/pods/test-rolling-update-deployment-79f6b9d75c-qfl9c,UID:1dee32f8-0489-4973-8d53-734396551cd5,ResourceVersion:17321634,Generation:0,CreationTimestamp:2020-06-19 13:42:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 6d45d441-2805-4ca1-a396-4b646a476dd9 0xc002d1c9b7 0xc002d1c9b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v7lss {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v7lss,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-v7lss true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d1ca30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d1ca50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:42:49 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:42:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:42:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:42:49 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.78,StartTime:2020-06-19 13:42:49 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-19 13:42:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://479ea35ee8e83d1576261ad2c92d5b23958cbaba5b4a6e02f3a39ec2c9b91c6b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:42:53.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5198" for this suite. 
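Note on the spec above: the "25%!,(MISSING)" fragments in the Deployment dump are Go fmt-verb artifacts; the strategy is an ordinary 25% maxUnavailable / 25% maxSurge rolling update. Boiled down from the object dumps, the deployment under test is:

  kubectl apply -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-rolling-update-deployment
  spec:
    replicas: 1
    selector:
      matchLabels:
        name: sample-pod  # matches, and therefore adopts, the pre-existing replica set
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 25%
        maxSurge: 25%
    template:
      metadata:
        labels:
          name: sample-pod
      spec:
        containers:
        - name: redis
          image: gcr.io/kubernetes-e2e-test-images/redis:1.0
  EOF
  # The adopted replica set is scaled to 0 and retained as revision history:
  kubectl get rs -l name=sample-pod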
Jun 19 13:42:59.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:42:59.674: INFO: namespace deployment-5198 deletion completed in 6.098371871s • [SLOW TEST:15.406 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:42:59.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 19 13:42:59.767: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:43:06.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2232" for this suite. 
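Note on the spec above: the RestartNever variant only asserts ordering; each init container runs to completion, one at a time, before the app container starts, and nothing restarts afterwards. An illustrative pod:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: init-demo
  spec:
    restartPolicy: Never
    initContainers:
    - name: init1
      image: busybox:1.29
      command: ["true"]
    - name: init2
      image: busybox:1.29
      command: ["true"]
    containers:
    - name: run1
      image: busybox:1.29
      command: ["sh", "-c", "echo app ran"]
  EOF
  kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state}'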
Jun 19 13:43:12.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:43:13.004: INFO: namespace init-container-2232 deletion completed in 6.10483336s • [SLOW TEST:13.331 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:43:13.005: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-449c328d-6ad2-4df7-8e57-36d7556d0cb5 in namespace container-probe-67 Jun 19 13:43:17.104: INFO: Started pod busybox-449c328d-6ad2-4df7-8e57-36d7556d0cb5 in namespace container-probe-67 STEP: checking the pod's current state and verifying that restartCount is present Jun 19 13:43:17.107: INFO: Initial restart count of pod busybox-449c328d-6ad2-4df7-8e57-36d7556d0cb5 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:47:18.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-67" for this suite. 
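Note on the spec above: this is the slow case; the four-minute gap in the timestamps is the observation window. The exec probe keeps succeeding because /tmp/health is never removed, so the test passes only if restartCount is still 0 at the end. An illustrative spec:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo
  spec:
    containers:
    - name: busybox
      image: busybox:1.29
      command: ["sh", "-c", "touch /tmp/health && sleep 600"]
      livenessProbe:
        exec:
          command: ["cat", "/tmp/health"]
        initialDelaySeconds: 5
        periodSeconds: 5
  EOF
  kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'  # expect 0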
Jun 19 13:47:24.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:47:24.684: INFO: namespace container-probe-67 deletion completed in 6.094008306s • [SLOW TEST:251.679 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:47:24.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller Jun 19 13:47:24.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7895' Jun 19 13:47:27.658: INFO: stderr: "" Jun 19 13:47:27.658: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 19 13:47:27.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7895' Jun 19 13:47:27.756: INFO: stderr: "" Jun 19 13:47:27.757: INFO: stdout: "update-demo-nautilus-4mwwl update-demo-nautilus-hslvh " Jun 19 13:47:27.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4mwwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:27.852: INFO: stderr: "" Jun 19 13:47:27.852: INFO: stdout: "" Jun 19 13:47:27.852: INFO: update-demo-nautilus-4mwwl is created but not running Jun 19 13:47:32.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7895' Jun 19 13:47:32.960: INFO: stderr: "" Jun 19 13:47:32.960: INFO: stdout: "update-demo-nautilus-4mwwl update-demo-nautilus-hslvh " Jun 19 13:47:32.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4mwwl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:33.059: INFO: stderr: "" Jun 19 13:47:33.059: INFO: stdout: "true" Jun 19 13:47:33.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4mwwl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:33.159: INFO: stderr: "" Jun 19 13:47:33.159: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:47:33.159: INFO: validating pod update-demo-nautilus-4mwwl Jun 19 13:47:33.171: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:47:33.171: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 19 13:47:33.171: INFO: update-demo-nautilus-4mwwl is verified up and running Jun 19 13:47:33.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hslvh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:33.265: INFO: stderr: "" Jun 19 13:47:33.265: INFO: stdout: "true" Jun 19 13:47:33.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hslvh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:33.353: INFO: stderr: "" Jun 19 13:47:33.353: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:47:33.353: INFO: validating pod update-demo-nautilus-hslvh Jun 19 13:47:33.365: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:47:33.365: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 19 13:47:33.365: INFO: update-demo-nautilus-hslvh is verified up and running STEP: rolling-update to new replication controller Jun 19 13:47:33.366: INFO: scanned /root for discovery docs: Jun 19 13:47:33.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7895' Jun 19 13:47:56.100: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jun 19 13:47:56.100: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 19 13:47:56.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7895' Jun 19 13:47:56.194: INFO: stderr: "" Jun 19 13:47:56.194: INFO: stdout: "update-demo-kitten-2gs9r update-demo-kitten-9l5tz " Jun 19 13:47:56.194: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2gs9r -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:56.293: INFO: stderr: "" Jun 19 13:47:56.293: INFO: stdout: "true" Jun 19 13:47:56.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2gs9r -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:56.383: INFO: stderr: "" Jun 19 13:47:56.384: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 19 13:47:56.384: INFO: validating pod update-demo-kitten-2gs9r Jun 19 13:47:56.391: INFO: got data: { "image": "kitten.jpg" } Jun 19 13:47:56.391: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 19 13:47:56.391: INFO: update-demo-kitten-2gs9r is verified up and running Jun 19 13:47:56.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9l5tz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:56.484: INFO: stderr: "" Jun 19 13:47:56.484: INFO: stdout: "true" Jun 19 13:47:56.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-9l5tz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7895' Jun 19 13:47:56.583: INFO: stderr: "" Jun 19 13:47:56.583: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jun 19 13:47:56.583: INFO: validating pod update-demo-kitten-9l5tz Jun 19 13:47:56.594: INFO: got data: { "image": "kitten.jpg" } Jun 19 13:47:56.594: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jun 19 13:47:56.594: INFO: update-demo-kitten-9l5tz is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:47:56.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7895" for this suite. 
Jun 19 13:48:20.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:48:20.719: INFO: namespace kubectl-7895 deletion completed in 24.121434407s • [SLOW TEST:56.036 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:48:20.720: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1d92dcf8-1a02-4f96-a44e-ab844a49ac7c STEP: Creating a pod to test consume configMaps Jun 19 13:48:20.868: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ca45c2c6-829b-4721-9570-b40a740b3d07" in namespace "projected-9545" to be "success or failure" Jun 19 13:48:20.879: INFO: Pod "pod-projected-configmaps-ca45c2c6-829b-4721-9570-b40a740b3d07": Phase="Pending", Reason="", readiness=false. Elapsed: 10.958058ms Jun 19 13:48:22.883: INFO: Pod "pod-projected-configmaps-ca45c2c6-829b-4721-9570-b40a740b3d07": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015255955s Jun 19 13:48:24.887: INFO: Pod "pod-projected-configmaps-ca45c2c6-829b-4721-9570-b40a740b3d07": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019273317s STEP: Saw pod success Jun 19 13:48:24.887: INFO: Pod "pod-projected-configmaps-ca45c2c6-829b-4721-9570-b40a740b3d07" satisfied condition "success or failure" Jun 19 13:48:24.889: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-ca45c2c6-829b-4721-9570-b40a740b3d07 container projected-configmap-volume-test: STEP: delete the pod Jun 19 13:48:24.966: INFO: Waiting for pod pod-projected-configmaps-ca45c2c6-829b-4721-9570-b40a740b3d07 to disappear Jun 19 13:48:24.977: INFO: Pod pod-projected-configmaps-ca45c2c6-829b-4721-9570-b40a740b3d07 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:48:24.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9545" for this suite. 
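The projected-configmap test above creates a ConfigMap, mounts it through a projected volume with defaultMode set, and asserts the resulting file mode inside the container. A minimal manifest along the same lines (all names and the mode value are illustrative, not the generated ones from this run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-configmap-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/projected"]   # shows the mode applied to each key file
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0644    # the knob this test variant exercises
      sources:
      - configMap:
          name: example-config
EOF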
Jun 19 13:48:30.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:48:31.064: INFO: namespace projected-9545 deletion completed in 6.083509148s • [SLOW TEST:10.344 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:48:31.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 19 13:48:31.132: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 19 13:48:31.164: INFO: Waiting for terminating namespaces to be deleted... Jun 19 13:48:31.197: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 19 13:48:31.202: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 19 13:48:31.202: INFO: Container kube-proxy ready: true, restart count 0 Jun 19 13:48:31.202: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 19 13:48:31.202: INFO: Container kindnet-cni ready: true, restart count 2 Jun 19 13:48:31.202: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 19 13:48:31.209: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 19 13:48:31.209: INFO: Container coredns ready: true, restart count 0 Jun 19 13:48:31.209: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 19 13:48:31.209: INFO: Container coredns ready: true, restart count 0 Jun 19 13:48:31.209: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 19 13:48:31.209: INFO: Container kindnet-cni ready: true, restart count 2 Jun 19 13:48:31.209: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 19 13:48:31.209: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1619f61483c84cf4], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
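The FailedScheduling event above is exactly what a pod with an unsatisfiable nodeSelector produces. A minimal reproduction (label key and value are illustrative; the point is that no node carries them):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod-demo
spec:
  nodeSelector:
    example-label: no-node-has-this   # nothing matches, so the pod stays Pending
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
# kubectl describe pod restricted-pod-demo then reports:
#   0/3 nodes are available: 3 node(s) didn't match node selector.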
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:48:32.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1239" for this suite. Jun 19 13:48:38.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:48:38.329: INFO: namespace sched-pred-1239 deletion completed in 6.097535303s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.265 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:48:38.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-4529abd5-a0e3-4169-9a9d-0573850dd020 STEP: Creating a pod to test consume secrets Jun 19 13:48:38.405: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c9a2dd62-f8a4-479a-aadc-1746f1ce6ac4" in namespace "projected-4240" to be "success or failure" Jun 19 13:48:38.430: INFO: Pod "pod-projected-secrets-c9a2dd62-f8a4-479a-aadc-1746f1ce6ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.949386ms Jun 19 13:48:40.435: INFO: Pod "pod-projected-secrets-c9a2dd62-f8a4-479a-aadc-1746f1ce6ac4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029568337s Jun 19 13:48:42.439: INFO: Pod "pod-projected-secrets-c9a2dd62-f8a4-479a-aadc-1746f1ce6ac4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033798841s STEP: Saw pod success Jun 19 13:48:42.439: INFO: Pod "pod-projected-secrets-c9a2dd62-f8a4-479a-aadc-1746f1ce6ac4" satisfied condition "success or failure" Jun 19 13:48:42.443: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-c9a2dd62-f8a4-479a-aadc-1746f1ce6ac4 container projected-secret-volume-test: STEP: delete the pod Jun 19 13:48:42.475: INFO: Waiting for pod pod-projected-secrets-c9a2dd62-f8a4-479a-aadc-1746f1ce6ac4 to disappear Jun 19 13:48:42.487: INFO: Pod pod-projected-secrets-c9a2dd62-f8a4-479a-aadc-1746f1ce6ac4 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:48:42.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4240" for this suite. 
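"with mappings" in the projected-secret test above means the secret keys are remapped to new file paths via items. A sketch of the relevant shape (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected/new-path-data-1"]   # reads via the mapped path
    volumeMounts:
    - name: sec
      mountPath: /etc/projected
  volumes:
  - name: sec
    projected:
      sources:
      - secret:
          name: example-secret
          items:
          - key: data-1
            path: new-path-data-1   # the mapping: key data-1 surfaces under this name
EOF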
Jun 19 13:48:48.503: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:48:48.585: INFO: namespace projected-4240 deletion completed in 6.09450305s • [SLOW TEST:10.255 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:48:48.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 19 13:48:48.620: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 19 13:48:48.629: INFO: Waiting for terminating namespaces to be deleted... Jun 19 13:48:48.631: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 19 13:48:48.636: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 19 13:48:48.636: INFO: Container kube-proxy ready: true, restart count 0 Jun 19 13:48:48.636: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 19 13:48:48.636: INFO: Container kindnet-cni ready: true, restart count 2 Jun 19 13:48:48.636: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 19 13:48:48.641: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 19 13:48:48.641: INFO: Container kube-proxy ready: true, restart count 0 Jun 19 13:48:48.641: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 19 13:48:48.641: INFO: Container kindnet-cni ready: true, restart count 2 Jun 19 13:48:48.641: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 19 13:48:48.641: INFO: Container coredns ready: true, restart count 0 Jun 19 13:48:48.641: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 19 13:48:48.641: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 Jun 19 13:48:48.735: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 Jun 19 13:48:48.735: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 Jun 19 13:48:48.735: INFO: Pod kindnet-gwz5g 
requesting resource cpu=100m on Node iruya-worker Jun 19 13:48:48.735: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 Jun 19 13:48:48.735: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker Jun 19 13:48:48.735: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-038daccb-a821-4ad9-8463-7a638a25e2e1.1619f618985749c9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7766/filler-pod-038daccb-a821-4ad9-8463-7a638a25e2e1 to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-038daccb-a821-4ad9-8463-7a638a25e2e1.1619f618e5e4cbdf], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-038daccb-a821-4ad9-8463-7a638a25e2e1.1619f61937e61be1], Reason = [Created], Message = [Created container filler-pod-038daccb-a821-4ad9-8463-7a638a25e2e1] STEP: Considering event: Type = [Normal], Name = [filler-pod-038daccb-a821-4ad9-8463-7a638a25e2e1.1619f61949bf7983], Reason = [Started], Message = [Started container filler-pod-038daccb-a821-4ad9-8463-7a638a25e2e1] STEP: Considering event: Type = [Normal], Name = [filler-pod-9af19af1-4a43-4146-bb99-0f6e9dd81863.1619f6189ad69992], Reason = [Scheduled], Message = [Successfully assigned sched-pred-7766/filler-pod-9af19af1-4a43-4146-bb99-0f6e9dd81863 to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-9af19af1-4a43-4146-bb99-0f6e9dd81863.1619f61920497ab7], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-9af19af1-4a43-4146-bb99-0f6e9dd81863.1619f6195b8c3377], Reason = [Created], Message = [Created container filler-pod-9af19af1-4a43-4146-bb99-0f6e9dd81863] STEP: Considering event: Type = [Normal], Name = [filler-pod-9af19af1-4a43-4146-bb99-0f6e9dd81863.1619f6196b04dfc3], Reason = [Started], Message = [Started container filler-pod-9af19af1-4a43-4146-bb99-0f6e9dd81863] STEP: Considering event: Type = [Warning], Name = [additional-pod.1619f6198a591f71], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:48:53.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7766" for this suite. 
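The resource-limits test above tallies the CPU already requested on each node (the cpu=100m / cpu=0m lines), starts "filler" pause pods sized to consume the remainder, and then shows that one more pod cannot be scheduled. Each filler pod is essentially this shape (the request value is illustrative; the test computes it from node allocatable minus existing requests):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: filler-pod-demo
spec:
  containers:
  - name: filler
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "800m"   # sized to absorb the node's remaining allocatable CPU
      limits:
        cpu: "800m"
EOF
# A subsequent pod requesting any CPU then fails with the event seen above:
#   0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.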
Jun 19 13:48:59.907: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:48:59.978: INFO: namespace sched-pred-7766 deletion completed in 6.088899959s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:11.393 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:48:59.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Jun 19 13:49:00.234: INFO: Waiting up to 5m0s for pod "pod-118e4250-b60a-4d47-973a-1f1310dc35f1" in namespace "emptydir-3068" to be "success or failure" Jun 19 13:49:00.236: INFO: Pod "pod-118e4250-b60a-4d47-973a-1f1310dc35f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.284731ms Jun 19 13:49:02.240: INFO: Pod "pod-118e4250-b60a-4d47-973a-1f1310dc35f1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005986102s Jun 19 13:49:04.243: INFO: Pod "pod-118e4250-b60a-4d47-973a-1f1310dc35f1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009863044s STEP: Saw pod success Jun 19 13:49:04.243: INFO: Pod "pod-118e4250-b60a-4d47-973a-1f1310dc35f1" satisfied condition "success or failure" Jun 19 13:49:04.246: INFO: Trying to get logs from node iruya-worker pod pod-118e4250-b60a-4d47-973a-1f1310dc35f1 container test-container: STEP: delete the pod Jun 19 13:49:04.262: INFO: Waiting for pod pod-118e4250-b60a-4d47-973a-1f1310dc35f1 to disappear Jun 19 13:49:04.266: INFO: Pod pod-118e4250-b60a-4d47-973a-1f1310dc35f1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:49:04.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3068" for this suite. 
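The emptyDir test above mounts a volume on the default medium (node-local disk) and asserts the directory mode. A minimal analogue (names illustrative; the conformance test uses its own mounttest image rather than busybox):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["ls", "-ld", "/test-volume"]   # prints the directory mode under test
    volumeMounts:
    - name: vol
      mountPath: /test-volume
  volumes:
  - name: vol
    emptyDir: {}   # default medium; medium: Memory would be the tmpfs variant
EOF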
Jun 19 13:49:10.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:49:10.354: INFO: namespace emptydir-3068 deletion completed in 6.086207268s • [SLOW TEST:10.376 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:49:10.355: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-b48ddd7f-f65c-4528-9872-cd14b688e254 STEP: Creating a pod to test consume secrets Jun 19 13:49:10.411: INFO: Waiting up to 5m0s for pod "pod-secrets-d31ee835-346d-454c-89d6-bf7cecc3994d" in namespace "secrets-5277" to be "success or failure" Jun 19 13:49:10.423: INFO: Pod "pod-secrets-d31ee835-346d-454c-89d6-bf7cecc3994d": Phase="Pending", Reason="", readiness=false. Elapsed: 11.909105ms Jun 19 13:49:12.426: INFO: Pod "pod-secrets-d31ee835-346d-454c-89d6-bf7cecc3994d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015611612s Jun 19 13:49:14.431: INFO: Pod "pod-secrets-d31ee835-346d-454c-89d6-bf7cecc3994d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019881033s STEP: Saw pod success Jun 19 13:49:14.431: INFO: Pod "pod-secrets-d31ee835-346d-454c-89d6-bf7cecc3994d" satisfied condition "success or failure" Jun 19 13:49:14.434: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-d31ee835-346d-454c-89d6-bf7cecc3994d container secret-volume-test: STEP: delete the pod Jun 19 13:49:14.473: INFO: Waiting for pod pod-secrets-d31ee835-346d-454c-89d6-bf7cecc3994d to disappear Jun 19 13:49:14.484: INFO: Pod pod-secrets-d31ee835-346d-454c-89d6-bf7cecc3994d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:49:14.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5277" for this suite. 
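As with the projected variants earlier, the plain secret-volume test with defaultMode comes down to one field on the volume source. A sketch (names and mode illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: example-secret
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: secret-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["ls", "-l", "/etc/secret-volume"]
    volumeMounts:
    - name: sec
      mountPath: /etc/secret-volume
  volumes:
  - name: sec
    secret:
      secretName: example-secret
      defaultMode: 0400   # applied to every projected key file
EOF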
Jun 19 13:49:20.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:49:20.614: INFO: namespace secrets-5277 deletion completed in 6.125919271s • [SLOW TEST:10.260 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:49:20.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:49:20.701: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 6.170987ms)
Jun 19 13:49:20.705: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.684308ms)
Jun 19 13:49:20.709: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.957386ms)
Jun 19 13:49:20.712: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.436876ms)
Jun 19 13:49:20.716: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.729906ms)
Jun 19 13:49:20.720: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.9291ms)
Jun 19 13:49:20.724: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.790537ms)
Jun 19 13:49:20.728: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.691909ms)
Jun 19 13:49:20.731: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.435497ms)
Jun 19 13:49:20.735: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.364495ms)
Jun 19 13:49:20.738: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.555279ms)
Jun 19 13:49:20.742: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.315992ms)
Jun 19 13:49:20.745: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.291121ms)
Jun 19 13:49:20.748: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.968185ms)
Jun 19 13:49:20.750: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.533443ms)
Jun 19 13:49:20.753: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.544558ms)
Jun 19 13:49:20.756: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.633919ms)
Jun 19 13:49:20.758: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.686884ms)
Jun 19 13:49:20.761: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.523539ms)
Jun 19 13:49:20.764: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.583472ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:49:20.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-461" for this suite. Jun 19 13:49:26.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:49:26.847: INFO: namespace proxy-461 deletion completed in 6.080135108s • [SLOW TEST:6.232 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:49:26.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:49:26.915: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 4.358261ms)
Jun 19 13:49:26.919: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.622754ms)
Jun 19 13:49:26.923: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.877326ms)
Jun 19 13:49:26.947: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 24.066882ms)
Jun 19 13:49:26.950: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.967461ms)
Jun 19 13:49:26.954: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.380256ms)
Jun 19 13:49:26.958: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.944661ms)
Jun 19 13:49:26.962: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.980281ms)
Jun 19 13:49:26.965: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.500812ms)
Jun 19 13:49:26.970: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.534945ms)
Jun 19 13:49:26.973: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.469224ms)
Jun 19 13:49:26.976: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.653235ms)
Jun 19 13:49:26.979: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.265787ms)
Jun 19 13:49:26.982: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.556817ms)
Jun 19 13:49:26.984: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.4056ms)
Jun 19 13:49:26.987: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.481781ms)
Jun 19 13:49:26.989: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.788177ms)
Jun 19 13:49:26.992: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.782538ms)
Jun 19 13:49:26.995: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.910454ms)
Jun 19 13:49:26.999: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.36813ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:49:26.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-2663" for this suite. Jun 19 13:49:33.020: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:49:33.091: INFO: namespace proxy-2663 deletion completed in 6.088346217s • [SLOW TEST:6.244 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:49:33.091: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:49:33.178: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Jun 19 13:49:33.350: INFO: stderr: "" Jun 19 13:49:33.350: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-06-08T12:08:14Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:49:33.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6190" for this suite. 
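The Kubectl version test simply shells out and asserts that both the client and server stanzas are printed. The same check by hand, plus a machine-readable form that is easier to assert on in scripts:

# Human-readable, as run by the test:
kubectl --kubeconfig=/root/.kube/config version
# Structured output for scripted comparisons:
kubectl --kubeconfig=/root/.kube/config version -o json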
Jun 19 13:49:39.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:49:39.438: INFO: namespace kubectl-6190 deletion completed in 6.083168529s • [SLOW TEST:6.348 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:49:39.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Jun 19 13:49:39.500: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5402,SelfLink:/api/v1/namespaces/watch-5402/configmaps/e2e-watch-test-watch-closed,UID:3c76bf7b-4272-4854-b26d-431ffd0cd7f6,ResourceVersion:17322841,Generation:0,CreationTimestamp:2020-06-19 13:49:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jun 19 13:49:39.500: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5402,SelfLink:/api/v1/namespaces/watch-5402/configmaps/e2e-watch-test-watch-closed,UID:3c76bf7b-4272-4854-b26d-431ffd0cd7f6,ResourceVersion:17322842,Generation:0,CreationTimestamp:2020-06-19 13:49:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Jun 19 13:49:39.542: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5402,SelfLink:/api/v1/namespaces/watch-5402/configmaps/e2e-watch-test-watch-closed,UID:3c76bf7b-4272-4854-b26d-431ffd0cd7f6,ResourceVersion:17322843,Generation:0,CreationTimestamp:2020-06-19 13:49:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 19 13:49:39.542: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5402,SelfLink:/api/v1/namespaces/watch-5402/configmaps/e2e-watch-test-watch-closed,UID:3c76bf7b-4272-4854-b26d-431ffd0cd7f6,ResourceVersion:17322844,Generation:0,CreationTimestamp:2020-06-19 13:49:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:49:39.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5402" for this suite. Jun 19 13:49:45.566: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:49:45.670: INFO: namespace watch-5402 deletion completed in 6.118177859s • [SLOW TEST:6.232 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:49:45.671: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 19 13:49:45.717: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:49:53.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3615" for this suite. 
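The init-container test builds a RestartAlways pod whose spec.initContainers must each run to completion, in order, before the regular container starts; the roughly eight-second gap between pod creation (13:49:45) and teardown reflects that sequencing. A minimal analogue (names and images illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:   # run one at a time, each to completion, before 'run1'
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF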
Jun 19 13:50:15.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:50:15.704: INFO: namespace init-container-3615 deletion completed in 22.094394912s • [SLOW TEST:30.034 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:50:15.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-c75ee50b-94ea-44ff-abfe-8baaae00da5f STEP: Creating secret with name s-test-opt-upd-505ca319-066d-4a56-9f40-52719583a52d STEP: Creating the pod STEP: Deleting secret s-test-opt-del-c75ee50b-94ea-44ff-abfe-8baaae00da5f STEP: Updating secret s-test-opt-upd-505ca319-066d-4a56-9f40-52719583a52d STEP: Creating secret with name s-test-opt-create-6487a3c2-f9e2-4da5-9723-f72d9b693bf6 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:50:25.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9756" for this suite. 
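The "optional updates" test mounts secrets marked optional, then deletes one, updates another, and creates a third, waiting for the kubelet to converge the mounted files. The key field is the optional flag on the projected source; a sketch (names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: optional-secret-demo
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: maybe-missing
      mountPath: /etc/secret-volume
  volumes:
  - name: maybe-missing
    projected:
      sources:
      - secret:
          name: not-created-yet   # may be absent at pod start
          optional: true          # pod starts anyway; files appear once the secret exists
EOF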
Jun 19 13:50:47.963: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:50:48.057: INFO: namespace projected-9756 deletion completed in 22.112268198s • [SLOW TEST:32.353 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:50:48.058: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 19 13:50:48.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-932' Jun 19 13:50:48.246: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 19 13:50:48.246: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Jun 19 13:50:52.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-932' Jun 19 13:50:52.433: INFO: stderr: "" Jun 19 13:50:52.433: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:50:52.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-932" for this suite. 
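The stderr above flags --generator=deployment/apps.v1 as deprecated and points at kubectl create. The non-deprecated equivalent of what the test does (same image; namespace taken from this run):

kubectl create deployment e2e-test-nginx-deployment \
  --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-932
# and the matching cleanup:
kubectl delete deployment e2e-test-nginx-deployment --namespace=kubectl-932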
Jun 19 13:51:14.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:51:14.547: INFO: namespace kubectl-932 deletion completed in 22.110071213s • [SLOW TEST:26.489 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:51:14.547: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 13:51:14.616: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494" in namespace "downward-api-5063" to be "success or failure" Jun 19 13:51:14.619: INFO: Pod "downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494": Phase="Pending", Reason="", readiness=false. Elapsed: 3.374039ms Jun 19 13:51:16.652: INFO: Pod "downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036521866s Jun 19 13:51:18.658: INFO: Pod "downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494": Phase="Running", Reason="", readiness=true. Elapsed: 4.041680436s Jun 19 13:51:20.662: INFO: Pod "downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045876964s STEP: Saw pod success Jun 19 13:51:20.662: INFO: Pod "downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494" satisfied condition "success or failure" Jun 19 13:51:20.664: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494 container client-container: STEP: delete the pod Jun 19 13:51:20.697: INFO: Waiting for pod downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494 to disappear Jun 19 13:51:20.706: INFO: Pod downwardapi-volume-dad93b70-f565-49f5-b49e-100f1bec6494 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:51:20.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5063" for this suite. 
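The downward-api DefaultMode test is the same file-mode assertion as the volume tests above, applied to a downwardAPI volume. A sketch (names, mode, and the projected field are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["ls", "-l", "/etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name   # the pod's own name, surfaced as a file
EOF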
Jun 19 13:51:26.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:51:26.866: INFO: namespace downward-api-5063 deletion completed in 6.157329252s • [SLOW TEST:12.319 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:51:26.866: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 13:51:26.906: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b3cec957-598f-4a20-a0f6-7a5cd98a2840" in namespace "downward-api-2976" to be "success or failure" Jun 19 13:51:26.923: INFO: Pod "downwardapi-volume-b3cec957-598f-4a20-a0f6-7a5cd98a2840": Phase="Pending", Reason="", readiness=false. Elapsed: 17.303654ms Jun 19 13:51:28.970: INFO: Pod "downwardapi-volume-b3cec957-598f-4a20-a0f6-7a5cd98a2840": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06454311s Jun 19 13:51:30.974: INFO: Pod "downwardapi-volume-b3cec957-598f-4a20-a0f6-7a5cd98a2840": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068622622s STEP: Saw pod success Jun 19 13:51:30.974: INFO: Pod "downwardapi-volume-b3cec957-598f-4a20-a0f6-7a5cd98a2840" satisfied condition "success or failure" Jun 19 13:51:30.977: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b3cec957-598f-4a20-a0f6-7a5cd98a2840 container client-container: STEP: delete the pod Jun 19 13:51:30.995: INFO: Waiting for pod downwardapi-volume-b3cec957-598f-4a20-a0f6-7a5cd98a2840 to disappear Jun 19 13:51:30.999: INFO: Pod downwardapi-volume-b3cec957-598f-4a20-a0f6-7a5cd98a2840 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:51:30.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2976" for this suite. 
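"should provide container's memory limit" surfaces a resource limit into a file via resourceFieldRef. A sketch (values illustrative; divisor controls the unit written to the file):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-memlimit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
          divisor: 1Mi   # the file then reads "64"
EOF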
Jun 19 13:51:37.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:51:37.192: INFO: namespace downward-api-2976 deletion completed in 6.170656815s • [SLOW TEST:10.326 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:51:37.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0619 13:51:47.290409 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 19 13:51:47.290: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:51:47.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8963" for this suite. 
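"not orphaning" means the replication controller is deleted with a propagation policy that lets the garbage collector remove its pods, which is what the ten-second wait above observes. Roughly the CLI equivalent (RC name illustrative; the test drives this through the API's deleteOptions rather than kubectl):

# Cascading (non-orphaning) delete: dependent pods are garbage-collected.
kubectl delete rc example-rc --cascade=true    # the default in this kubectl era
# --cascade=false would orphan the pods and leave them running instead.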
Jun 19 13:51:53.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:51:53.413: INFO: namespace gc-8963 deletion completed in 6.119724017s • [SLOW TEST:16.221 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:51:53.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-bea5a44a-f2e6-4ee3-a070-53ea30070cb1 STEP: Creating a pod to test consume secrets Jun 19 13:51:53.536: INFO: Waiting up to 5m0s for pod "pod-secrets-820ab643-7301-43e3-885d-f0c873633281" in namespace "secrets-362" to be "success or failure" Jun 19 13:51:53.559: INFO: Pod "pod-secrets-820ab643-7301-43e3-885d-f0c873633281": Phase="Pending", Reason="", readiness=false. Elapsed: 22.975018ms Jun 19 13:51:55.564: INFO: Pod "pod-secrets-820ab643-7301-43e3-885d-f0c873633281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027918127s Jun 19 13:51:57.567: INFO: Pod "pod-secrets-820ab643-7301-43e3-885d-f0c873633281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.030998791s STEP: Saw pod success Jun 19 13:51:57.567: INFO: Pod "pod-secrets-820ab643-7301-43e3-885d-f0c873633281" satisfied condition "success or failure" Jun 19 13:51:57.569: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-820ab643-7301-43e3-885d-f0c873633281 container secret-volume-test: STEP: delete the pod Jun 19 13:51:57.602: INFO: Waiting for pod pod-secrets-820ab643-7301-43e3-885d-f0c873633281 to disappear Jun 19 13:51:57.631: INFO: Pod pod-secrets-820ab643-7301-43e3-885d-f0c873633281 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:51:57.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-362" for this suite. Jun 19 13:52:03.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:52:03.752: INFO: namespace secrets-362 deletion completed in 6.11821113s STEP: Destroying namespace "secret-namespace-501" for this suite. 
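The two destroyed namespaces above come from the test creating same-named secrets in two namespaces and mounting only one. Secret names are namespace-scoped, so the mount always resolves locally; for example (names illustrative):

kubectl create namespace demo-a
kubectl create namespace demo-b
kubectl create secret generic shared-name --from-literal=k=value-a -n demo-a
kubectl create secret generic shared-name --from-literal=k=value-b -n demo-b
# A pod in demo-a that mounts secret "shared-name" sees value-a;
# demo-b's copy is invisible to it despite the identical name.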
Jun 19 13:52:09.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:52:09.852: INFO: namespace secret-namespace-501 deletion completed in 6.099155413s • [SLOW TEST:16.438 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:52:09.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:52:09.899: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:52:10.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4152" for this suite. 
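For reference, the create/delete round trip above can be driven with the apiextensions client rather than the core clientset. A sketch against the v1beta1 CRD API that a v1.15 apiserver serves (v1beta1 has since been removed from current releases; the group, kind, and plural here are all hypothetical):

    package main

    import (
        apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
        apiextcs "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := apiextcs.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        crd := &apiextv1beta1.CustomResourceDefinition{
            ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
            Spec: apiextv1beta1.CustomResourceDefinitionSpec{
                Group:   "example.com",
                Version: "v1",
                Scope:   apiextv1beta1.NamespaceScoped,
                Names: apiextv1beta1.CustomResourceDefinitionNames{
                    Plural:   "widgets",
                    Singular: "widget",
                    Kind:     "Widget",
                    ListKind: "WidgetList",
                },
            },
        }
        // Create, then delete: the round trip this spec verifies.
        if _, err := cs.ApiextensionsV1beta1().CustomResourceDefinitions().Create(crd); err != nil {
            panic(err)
        }
        if err := cs.ApiextensionsV1beta1().CustomResourceDefinitions().Delete(
            "widgets.example.com", nil); err != nil {
            panic(err)
        }
    }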
Jun 19 13:52:17.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:52:17.088: INFO: namespace custom-resource-definition-4152 deletion completed in 6.098472594s • [SLOW TEST:7.236 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:52:17.088: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 13:52:17.171: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jun 19 13:52:22.176: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 19 13:52:22.176: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jun 19 13:52:24.180: INFO: Creating deployment "test-rollover-deployment" Jun 19 13:52:24.196: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jun 19 13:52:26.202: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jun 19 13:52:26.208: INFO: Ensure that both replica sets have 1 created replica Jun 19 13:52:26.213: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jun 19 13:52:26.219: INFO: Updating deployment test-rollover-deployment Jun 19 13:52:26.219: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jun 19 13:52:28.229: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jun 19 13:52:28.233: INFO: Make sure deployment "test-rollover-deployment" is complete Jun 19 13:52:28.238: INFO: all replica sets need to contain the pod-template-hash label Jun 19 13:52:28.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171546, loc:(*time.Location)(0x7ead8c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 13:52:30.246: INFO: all replica sets need to contain the pod-template-hash label Jun 19 13:52:30.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171549, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 13:52:32.246: INFO: all replica sets need to contain the pod-template-hash label Jun 19 13:52:32.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171549, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 13:52:34.247: INFO: all replica sets need to contain the pod-template-hash label Jun 19 13:52:34.247: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171549, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 13:52:36.247: INFO: all replica sets need to contain the pod-template-hash label Jun 19 13:52:36.247: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171549, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 13:52:38.248: INFO: all replica sets need to contain the pod-template-hash label Jun 19 13:52:38.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171549, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728171544, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 13:52:40.246: INFO: Jun 19 13:52:40.246: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 19 13:52:40.254: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-9125,SelfLink:/apis/apps/v1/namespaces/deployment-9125/deployments/test-rollover-deployment,UID:11602997-5872-44fb-80ca-2b253ed84815,ResourceVersion:17323550,Generation:2,CreationTimestamp:2020-06-19 13:52:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-06-19 13:52:24 +0000 UTC 2020-06-19 13:52:24 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-06-19 13:52:40 +0000 UTC 2020-06-19 13:52:24 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jun 19 13:52:40.258: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-9125,SelfLink:/apis/apps/v1/namespaces/deployment-9125/replicasets/test-rollover-deployment-854595fc44,UID:15232288-b9e4-40db-8def-402293c8b333,ResourceVersion:17323539,Generation:2,CreationTimestamp:2020-06-19 13:52:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 11602997-5872-44fb-80ca-2b253ed84815 0xc002532fd7 0xc002532fd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 19 13:52:40.258: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jun 19 13:52:40.258: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-9125,SelfLink:/apis/apps/v1/namespaces/deployment-9125/replicasets/test-rollover-controller,UID:4d6d5288-0bf6-4d3a-bcd7-21ccfb6cc21a,ResourceVersion:17323548,Generation:2,CreationTimestamp:2020-06-19 13:52:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 11602997-5872-44fb-80ca-2b253ed84815 0xc002532f07 0xc002532f08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 19 13:52:40.259: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-9125,SelfLink:/apis/apps/v1/namespaces/deployment-9125/replicasets/test-rollover-deployment-9b8b997cf,UID:73accc12-8e91-4ab3-927a-d7951808a74e,ResourceVersion:17323501,Generation:2,CreationTimestamp:2020-06-19 13:52:24 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 11602997-5872-44fb-80ca-2b253ed84815 0xc0025330b0 0xc0025330b1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 19 13:52:40.263: INFO: Pod "test-rollover-deployment-854595fc44-b8bth" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-b8bth,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-9125,SelfLink:/api/v1/namespaces/deployment-9125/pods/test-rollover-deployment-854595fc44-b8bth,UID:7839df02-3967-477f-af89-5abdf888e677,ResourceVersion:17323517,Generation:0,CreationTimestamp:2020-06-19 13:52:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 15232288-b9e4-40db-8def-402293c8b333 0xc002533ca7 0xc002533ca8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-f5qs2 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-f5qs2,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-f5qs2 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002533d20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002533d40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:52:26 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:52:29 +0000 UTC } {ContainersReady True 0001-01-01 
00:00:00 +0000 UTC 2020-06-19 13:52:29 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 13:52:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.106,StartTime:2020-06-19 13:52:26 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-06-19 13:52:29 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b625250d5d99e9af5e861f9e21c3e91233ac19959cef47d75458a5bf66ad5d46}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:52:40.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9125" for this suite. Jun 19 13:52:46.496: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:52:46.563: INFO: namespace deployment-9125 deletion completed in 6.295787038s • [SLOW TEST:29.474 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:52:46.563: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 19 13:52:46.636: INFO: Waiting up to 5m0s for pod "pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153" in namespace "emptydir-4698" to be "success or failure" Jun 19 13:52:46.645: INFO: Pod "pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153": Phase="Pending", Reason="", readiness=false. Elapsed: 9.775504ms Jun 19 13:52:48.650: INFO: Pod "pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013982883s Jun 19 13:52:50.654: INFO: Pod "pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153": Phase="Running", Reason="", readiness=true. Elapsed: 4.018653401s Jun 19 13:52:52.659: INFO: Pod "pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.023018355s STEP: Saw pod success Jun 19 13:52:52.659: INFO: Pod "pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153" satisfied condition "success or failure" Jun 19 13:52:52.663: INFO: Trying to get logs from node iruya-worker pod pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153 container test-container: STEP: delete the pod Jun 19 13:52:52.694: INFO: Waiting for pod pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153 to disappear Jun 19 13:52:52.706: INFO: Pod pod-61c251d2-5ddd-4471-a0bc-6330fe1a2153 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:52:52.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4698" for this suite. Jun 19 13:52:58.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:52:58.807: INFO: namespace emptydir-4698 deletion completed in 6.09720977s • [SLOW TEST:12.244 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:52:58.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-330f66c0-9710-4843-92b9-8fe83407f8fa in namespace container-probe-1320 Jun 19 13:53:02.902: INFO: Started pod liveness-330f66c0-9710-4843-92b9-8fe83407f8fa in namespace container-probe-1320 STEP: checking the pod's current state and verifying that restartCount is present Jun 19 13:53:02.905: INFO: Initial restart count of pod liveness-330f66c0-9710-4843-92b9-8fe83407f8fa is 0 Jun 19 13:53:20.985: INFO: Restart count of pod container-probe-1320/liveness-330f66c0-9710-4843-92b9-8fe83407f8fa is now 1 (18.079852012s elapsed) Jun 19 13:53:41.046: INFO: Restart count of pod container-probe-1320/liveness-330f66c0-9710-4843-92b9-8fe83407f8fa is now 2 (38.140682392s elapsed) Jun 19 13:54:01.152: INFO: Restart count of pod container-probe-1320/liveness-330f66c0-9710-4843-92b9-8fe83407f8fa is now 3 (58.246226608s elapsed) Jun 19 13:54:21.242: INFO: Restart count of pod container-probe-1320/liveness-330f66c0-9710-4843-92b9-8fe83407f8fa is now 4 (1m18.337001571s elapsed) Jun 19 13:55:31.573: INFO: Restart count of pod container-probe-1320/liveness-330f66c0-9710-4843-92b9-8fe83407f8fa is now 5 (2m28.667731449s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:55:31.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1320" for this suite. Jun 19 13:55:37.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:55:37.668: INFO: namespace container-probe-1320 deletion completed in 6.078391534s • [SLOW TEST:158.861 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:55:37.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 19 13:55:37.730: INFO: Waiting up to 5m0s for pod "downward-api-28e71be8-3a40-4faf-bd95-e7824314a300" in namespace "downward-api-7557" to be "success or failure" Jun 19 13:55:37.796: INFO: Pod "downward-api-28e71be8-3a40-4faf-bd95-e7824314a300": Phase="Pending", Reason="", readiness=false. Elapsed: 66.224578ms Jun 19 13:55:39.800: INFO: Pod "downward-api-28e71be8-3a40-4faf-bd95-e7824314a300": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069716306s Jun 19 13:55:41.805: INFO: Pod "downward-api-28e71be8-3a40-4faf-bd95-e7824314a300": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.075156635s STEP: Saw pod success Jun 19 13:55:41.805: INFO: Pod "downward-api-28e71be8-3a40-4faf-bd95-e7824314a300" satisfied condition "success or failure" Jun 19 13:55:41.809: INFO: Trying to get logs from node iruya-worker2 pod downward-api-28e71be8-3a40-4faf-bd95-e7824314a300 container dapi-container: STEP: delete the pod Jun 19 13:55:41.850: INFO: Waiting for pod downward-api-28e71be8-3a40-4faf-bd95-e7824314a300 to disappear Jun 19 13:55:41.852: INFO: Pod downward-api-28e71be8-3a40-4faf-bd95-e7824314a300 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:55:41.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7557" for this suite. 
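A note on the case above: the test container declares no resource limits, so the downward API resolves limits.cpu and limits.memory from the node's allocatable capacity instead, which is the defaulting being verified. A sketch of the relevant env wiring with hypothetical pod and variable names (v1.15-era signatures, same scaffolding as the earlier sketches):

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-defaults-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    // No resources.limits are declared, so the resourceFieldRefs
                    // below fall back to the node's allocatable CPU and memory.
                    Env: []corev1.EnvVar{
                        {Name: "CPU_LIMIT", ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"}}},
                        {Name: "MEMORY_LIMIT", ValueFrom: &corev1.EnvVarSource{
                            ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"}}},
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }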
Jun 19 13:55:47.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:55:48.003: INFO: namespace downward-api-7557 deletion completed in 6.148026625s • [SLOW TEST:10.335 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:55:48.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Jun 19 13:55:48.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6638' Jun 19 13:55:48.408: INFO: stderr: "" Jun 19 13:55:48.408: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jun 19 13:55:48.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6638' Jun 19 13:55:48.533: INFO: stderr: "" Jun 19 13:55:48.533: INFO: stdout: "update-demo-nautilus-mdd4z update-demo-nautilus-xck6m " Jun 19 13:55:48.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mdd4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6638' Jun 19 13:55:48.630: INFO: stderr: "" Jun 19 13:55:48.630: INFO: stdout: "" Jun 19 13:55:48.630: INFO: update-demo-nautilus-mdd4z is created but not running Jun 19 13:55:53.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6638' Jun 19 13:55:53.741: INFO: stderr: "" Jun 19 13:55:53.741: INFO: stdout: "update-demo-nautilus-mdd4z update-demo-nautilus-xck6m " Jun 19 13:55:53.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mdd4z -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6638' Jun 19 13:55:53.832: INFO: stderr: "" Jun 19 13:55:53.832: INFO: stdout: "true" Jun 19 13:55:53.832: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mdd4z -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6638' Jun 19 13:55:53.919: INFO: stderr: "" Jun 19 13:55:53.919: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:55:53.919: INFO: validating pod update-demo-nautilus-mdd4z Jun 19 13:55:53.923: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:55:53.923: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 19 13:55:53.923: INFO: update-demo-nautilus-mdd4z is verified up and running Jun 19 13:55:53.923: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xck6m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6638' Jun 19 13:55:54.020: INFO: stderr: "" Jun 19 13:55:54.020: INFO: stdout: "true" Jun 19 13:55:54.021: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-xck6m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6638' Jun 19 13:55:54.117: INFO: stderr: "" Jun 19 13:55:54.117: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jun 19 13:55:54.117: INFO: validating pod update-demo-nautilus-xck6m Jun 19 13:55:54.122: INFO: got data: { "image": "nautilus.jpg" } Jun 19 13:55:54.122: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jun 19 13:55:54.122: INFO: update-demo-nautilus-xck6m is verified up and running STEP: using delete to clean up resources Jun 19 13:55:54.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6638' Jun 19 13:55:54.242: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 19 13:55:54.242: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Jun 19 13:55:54.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6638' Jun 19 13:55:54.336: INFO: stderr: "No resources found.\n" Jun 19 13:55:54.336: INFO: stdout: "" Jun 19 13:55:54.336: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6638 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 19 13:55:54.425: INFO: stderr: "" Jun 19 13:55:54.425: INFO: stdout: "update-demo-nautilus-mdd4z\nupdate-demo-nautilus-xck6m\n" Jun 19 13:55:54.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6638' Jun 19 13:55:55.034: INFO: stderr: "No resources found.\n" Jun 19 13:55:55.035: INFO: stdout: "" Jun 19 13:55:55.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6638 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 19 13:55:55.129: INFO: stderr: "" Jun 19 13:55:55.129: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:55:55.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6638" for this suite. Jun 19 13:56:17.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:56:17.301: INFO: namespace kubectl-6638 deletion completed in 22.168111673s • [SLOW TEST:29.298 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:56:17.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:57:17.395: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "container-probe-5790" for this suite. Jun 19 13:57:39.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:57:39.488: INFO: namespace container-probe-5790 deletion completed in 22.089273325s • [SLOW TEST:82.186 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:57:39.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 13:57:39.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a3fba426-95be-4f0e-a52a-a6898e796aeb" in namespace "projected-2949" to be "success or failure" Jun 19 13:57:39.593: INFO: Pod "downwardapi-volume-a3fba426-95be-4f0e-a52a-a6898e796aeb": Phase="Pending", Reason="", readiness=false. Elapsed: 17.86899ms Jun 19 13:57:41.598: INFO: Pod "downwardapi-volume-a3fba426-95be-4f0e-a52a-a6898e796aeb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022364528s Jun 19 13:57:43.602: INFO: Pod "downwardapi-volume-a3fba426-95be-4f0e-a52a-a6898e796aeb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026157332s STEP: Saw pod success Jun 19 13:57:43.602: INFO: Pod "downwardapi-volume-a3fba426-95be-4f0e-a52a-a6898e796aeb" satisfied condition "success or failure" Jun 19 13:57:43.604: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a3fba426-95be-4f0e-a52a-a6898e796aeb container client-container: STEP: delete the pod Jun 19 13:57:43.675: INFO: Waiting for pod downwardapi-volume-a3fba426-95be-4f0e-a52a-a6898e796aeb to disappear Jun 19 13:57:43.682: INFO: Pod downwardapi-volume-a3fba426-95be-4f0e-a52a-a6898e796aeb no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:57:43.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2949" for this suite. 
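The projected downward API case above turns on the per-item Mode field of the projection. A sketch of such a volume with hypothetical names and an assumed mode of 0400; the container just lists the directory so the mode would be visible in its log:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        mode := int32(0400)
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "projected-mode-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path:     "podname",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                        Mode:     &mode, // the per-item file mode under test
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "ls -l /etc/podinfo"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }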
Jun 19 13:57:49.709: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:57:49.790: INFO: namespace projected-2949 deletion completed in 6.103609196s • [SLOW TEST:10.302 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:57:49.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 19 13:57:49.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3355' Jun 19 13:57:52.648: INFO: stderr: "" Jun 19 13:57:52.648: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jun 19 13:57:52.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3355' Jun 19 13:58:02.163: INFO: stderr: "" Jun 19 13:58:02.163: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:58:02.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3355" for this suite. 
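The kubectl invocation logged above can be reproduced directly; a small Go sketch that shells out the way the e2e framework does, mirroring the logged flags (namespace and kubeconfig path are assumptions; note that --generator=run-pod/v1 was accepted by the v1.15-era kubectl used here but newer kubectl releases no longer take a --generator flag, and plain `kubectl run` creates a bare Pod by default):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Same shape as the command in the log, against a hypothetical
        // namespace "demo".
        out, err := exec.Command("kubectl", "--kubeconfig=/root/.kube/config",
            "run", "e2e-test-nginx-pod",
            "--restart=Never", "--generator=run-pod/v1",
            "--image=docker.io/library/nginx:1.14-alpine",
            "--namespace=demo").CombinedOutput()
        fmt.Println(string(out)) // expected stdout: pod/e2e-test-nginx-pod created
        if err != nil {
            panic(err)
        }
    }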
Jun 19 13:58:08.192: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:58:08.294: INFO: namespace kubectl-3355 deletion completed in 6.12733402s • [SLOW TEST:18.504 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:58:08.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:58:08.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4548" for this suite. 
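For the QoS test above: the class is derived from each container's requests and limits (fully specified and equal gives Guaranteed, requests below limits gives Burstable, none at all gives BestEffort, as with the BestEffort pods seen elsewhere in this run). A sketch that creates a Guaranteed pod and reads the class back, with hypothetical names; the apiserver computes status.qosClass at admission, so it should already be present on the created object:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        // requests == limits for every resource of every container => Guaranteed.
        rl := corev1.ResourceList{
            corev1.ResourceCPU:    resource.MustParse("100m"),
            corev1.ResourceMemory: resource.MustParse("100Mi"),
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "qos-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:      "qos-container",
                    Image:     "busybox",
                    Command:   []string{"sh", "-c", "sleep 3600"},
                    Resources: corev1.ResourceRequirements{Requests: rl, Limits: rl},
                }},
            },
        }
        created, err := cs.CoreV1().Pods("default").Create(pod)
        if err != nil {
            panic(err)
        }
        fmt.Println(created.Status.QOSClass) // expected: Guaranteed
    }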
Jun 19 13:58:38.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:58:38.502: INFO: namespace pods-4548 deletion completed in 30.100718642s • [SLOW TEST:30.208 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:58:38.503: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Jun 19 13:58:38.578: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:58:52.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9118" for this suite. 
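The "setting up watch" step above is an ordinary pods watch: the test observes the creation event, the graceful-deletion update (deletionTimestamp set), and finally the deletion event. A sketch of watching the same lifecycle, with a hypothetical label selector and namespace (v1.15-era context-free signatures):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/watch"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        w, err := cs.CoreV1().Pods("default").Watch(metav1.ListOptions{
            LabelSelector: "name=pod-watch-demo", // hypothetical label
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            p, ok := ev.Object.(*corev1.Pod)
            if !ok {
                continue
            }
            // ADDED on submit, MODIFIED with deletionTimestamp set during
            // graceful termination, DELETED once the pod is gone.
            fmt.Printf("%s %s deleting=%v\n", ev.Type, p.Name, p.DeletionTimestamp != nil)
            if ev.Type == watch.Deleted {
                return
            }
        }
    }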
Jun 19 13:58:58.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:58:58.338: INFO: namespace pods-9118 deletion completed in 6.130713452s • [SLOW TEST:19.835 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:58:58.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-c9c13c6f-5713-4a95-89f0-ffaa0b8482d9 STEP: Creating a pod to test consume secrets Jun 19 13:58:58.446: INFO: Waiting up to 5m0s for pod "pod-secrets-1e7e0b6d-4d4e-4010-8541-c8579abcdcaf" in namespace "secrets-1504" to be "success or failure" Jun 19 13:58:58.456: INFO: Pod "pod-secrets-1e7e0b6d-4d4e-4010-8541-c8579abcdcaf": Phase="Pending", Reason="", readiness=false. Elapsed: 9.208406ms Jun 19 13:59:00.499: INFO: Pod "pod-secrets-1e7e0b6d-4d4e-4010-8541-c8579abcdcaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053021554s Jun 19 13:59:02.504: INFO: Pod "pod-secrets-1e7e0b6d-4d4e-4010-8541-c8579abcdcaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.05737346s STEP: Saw pod success Jun 19 13:59:02.504: INFO: Pod "pod-secrets-1e7e0b6d-4d4e-4010-8541-c8579abcdcaf" satisfied condition "success or failure" Jun 19 13:59:02.507: INFO: Trying to get logs from node iruya-worker pod pod-secrets-1e7e0b6d-4d4e-4010-8541-c8579abcdcaf container secret-volume-test: STEP: delete the pod Jun 19 13:59:02.528: INFO: Waiting for pod pod-secrets-1e7e0b6d-4d4e-4010-8541-c8579abcdcaf to disappear Jun 19 13:59:02.533: INFO: Pod pod-secrets-1e7e0b6d-4d4e-4010-8541-c8579abcdcaf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 13:59:02.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1504" for this suite. 
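Nothing prevents the same secret from backing several volumes in one pod, which is what the test above mounts and reads back. A sketch of such a spec; the secret name, mount paths, key, and image are hypothetical, and the secret is assumed to already exist:

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                // Two volumes, one backing secret.
                Volumes: []corev1.Volume{
                    {Name: "secret-volume-1", VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "secret-demo"}}},
                    {Name: "secret-volume-2", VolumeSource: corev1.VolumeSource{
                        Secret: &corev1.SecretVolumeSource{SecretName: "secret-demo"}}},
                },
                Containers: []corev1.Container{{
                    Name:    "secret-volume-test",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/secret-1/data-1 /etc/secret-2/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "secret-volume-1", MountPath: "/etc/secret-1"},
                        {Name: "secret-volume-2", MountPath: "/etc/secret-2"},
                    },
                }},
            },
        }
        if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
            panic(err)
        }
    }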
Jun 19 13:59:08.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 13:59:08.669: INFO: namespace secrets-1504 deletion completed in 6.132196826s • [SLOW TEST:10.331 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 13:59:08.669: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-7c8c9cd6-9bcb-48ea-877c-a84f92c51195 STEP: Creating configMap with name cm-test-opt-upd-70599a90-17c5-47b6-bdb1-b5a23da7b4a9 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-7c8c9cd6-9bcb-48ea-877c-a84f92c51195 STEP: Updating configmap cm-test-opt-upd-70599a90-17c5-47b6-bdb1-b5a23da7b4a9 STEP: Creating configMap with name cm-test-opt-create-62664f17-5876-44ae-8c5c-320e731655a4 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:00:37.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1677" for this suite. 
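The optional-ConfigMap spec above depends on two behaviors: a volume whose configMap is marked optional mounts even when the map does not exist yet, and the kubelet periodically resyncs mounted ConfigMap contents into the volume, which is why most of the 110 seconds goes to "waiting to observe update in volume". A sketch of the volume wiring, with shortened names; image, paths, and keys are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-optional-demo
spec:
  containers:
  - name: cm-volume-test
    image: busybox                    # illustrative
    command: ["sh", "-c", "while true; do cat /etc/cm-del/data-1 /etc/cm-upd/data-1 2>/dev/null; sleep 5; done"]
    volumeMounts:
    - { name: cm-del, mountPath: /etc/cm-del }
    - { name: cm-upd, mountPath: /etc/cm-upd }
  volumes:
  - name: cm-del
    configMap:
      name: cm-test-opt-del           # deleted mid-test; the pod keeps running
      optional: true
  - name: cm-upd
    configMap:
      name: cm-test-opt-upd           # updated mid-test; the mounted file follows
      optional: true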
Jun 19 14:00:59.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:00:59.422: INFO: namespace configmap-1677 deletion completed in 22.101473964s • [SLOW TEST:110.753 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:00:59.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 14:00:59.527: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Jun 19 14:00:59.559: INFO: Number of nodes with available pods: 0 Jun 19 14:00:59.559: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
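The daemon pod in this spec is pinned by a node selector, so relabeling a node is what schedules it onto "blue" and later evicts it when the label flips to "green" in the steps that follow. A sketch of the shape of such a DaemonSet; label key, names, and image are illustrative:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      name: daemon-set
  template:
    metadata:
      labels:
        name: daemon-set
    spec:
      nodeSelector:
        color: blue                   # illustrative label key/value
      containers:
      - name: app
        image: nginx:1.14-alpine      # illustrative

# move the daemon on and off a node by relabeling it:
#   kubectl label node iruya-worker color=blue --overwrite
#   kubectl label node iruya-worker color=green --overwrite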
Jun 19 14:00:59.631: INFO: Number of nodes with available pods: 0 Jun 19 14:00:59.631: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:00.747: INFO: Number of nodes with available pods: 0 Jun 19 14:01:00.747: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:01.635: INFO: Number of nodes with available pods: 0 Jun 19 14:01:01.635: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:02.635: INFO: Number of nodes with available pods: 0 Jun 19 14:01:02.635: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:03.635: INFO: Number of nodes with available pods: 1 Jun 19 14:01:03.635: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Jun 19 14:01:03.674: INFO: Number of nodes with available pods: 1 Jun 19 14:01:03.674: INFO: Number of running nodes: 0, number of available pods: 1 Jun 19 14:01:04.678: INFO: Number of nodes with available pods: 0 Jun 19 14:01:04.678: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Jun 19 14:01:04.703: INFO: Number of nodes with available pods: 0 Jun 19 14:01:04.703: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:05.747: INFO: Number of nodes with available pods: 0 Jun 19 14:01:05.747: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:06.708: INFO: Number of nodes with available pods: 0 Jun 19 14:01:06.708: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:07.707: INFO: Number of nodes with available pods: 0 Jun 19 14:01:07.707: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:08.707: INFO: Number of nodes with available pods: 0 Jun 19 14:01:08.707: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:09.707: INFO: Number of nodes with available pods: 0 Jun 19 14:01:09.707: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:10.708: INFO: Number of nodes with available pods: 0 Jun 19 14:01:10.708: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:11.716: INFO: Number of nodes with available pods: 0 Jun 19 14:01:11.716: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:12.707: INFO: Number of nodes with available pods: 0 Jun 19 14:01:12.708: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:13.717: INFO: Number of nodes with available pods: 0 Jun 19 14:01:13.717: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:14.712: INFO: Number of nodes with available pods: 0 Jun 19 14:01:14.712: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:01:15.707: INFO: Number of nodes with available pods: 1 Jun 19 14:01:15.707: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3270, will wait for the garbage collector to delete the pods Jun 19 14:01:15.772: INFO: Deleting DaemonSet.extensions daemon-set took: 6.786077ms Jun 19 14:01:16.073: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.282416ms Jun 19 14:01:22.176: INFO: Number of nodes with available pods: 0 Jun 19 14:01:22.176: INFO: 
Number of running nodes: 0, number of available pods: 0 Jun 19 14:01:22.178: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3270/daemonsets","resourceVersion":"17324996"},"items":null} Jun 19 14:01:22.181: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3270/pods","resourceVersion":"17324996"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:01:22.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3270" for this suite. Jun 19 14:01:28.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:01:28.322: INFO: namespace daemonsets-3270 deletion completed in 6.098755562s • [SLOW TEST:28.898 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:01:28.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:01:28.409: INFO: Waiting up to 5m0s for pod "downwardapi-volume-815ce5a6-c305-45ff-953a-a7e765277451" in namespace "projected-4327" to be "success or failure" Jun 19 14:01:28.412: INFO: Pod "downwardapi-volume-815ce5a6-c305-45ff-953a-a7e765277451": Phase="Pending", Reason="", readiness=false. Elapsed: 3.041251ms Jun 19 14:01:30.417: INFO: Pod "downwardapi-volume-815ce5a6-c305-45ff-953a-a7e765277451": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007495928s Jun 19 14:01:32.421: INFO: Pod "downwardapi-volume-815ce5a6-c305-45ff-953a-a7e765277451": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011818237s STEP: Saw pod success Jun 19 14:01:32.421: INFO: Pod "downwardapi-volume-815ce5a6-c305-45ff-953a-a7e765277451" satisfied condition "success or failure" Jun 19 14:01:32.423: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-815ce5a6-c305-45ff-953a-a7e765277451 container client-container: STEP: delete the pod Jun 19 14:01:32.448: INFO: Waiting for pod downwardapi-volume-815ce5a6-c305-45ff-953a-a7e765277451 to disappear Jun 19 14:01:32.452: INFO: Pod downwardapi-volume-815ce5a6-c305-45ff-953a-a7e765277451 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:01:32.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4327" for this suite. Jun 19 14:01:38.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:01:38.618: INFO: namespace projected-4327 deletion completed in 6.16284365s • [SLOW TEST:10.295 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:01:38.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-824 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-824 to expose endpoints map[] Jun 19 14:01:38.735: INFO: Get endpoints failed (32.127788ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Jun 19 14:01:39.739: INFO: successfully validated that service endpoint-test2 in namespace services-824 exposes endpoints map[] (1.036463757s elapsed) STEP: Creating pod pod1 in namespace services-824 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-824 to expose endpoints map[pod1:[80]] Jun 19 14:01:43.792: INFO: successfully validated that service endpoint-test2 in namespace services-824 exposes endpoints map[pod1:[80]] (4.045654379s elapsed) STEP: Creating pod pod2 in namespace services-824 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-824 to expose endpoints map[pod1:[80] pod2:[80]] Jun 19 14:01:47.852: INFO: successfully validated that service endpoint-test2 in namespace services-824 exposes endpoints map[pod1:[80] pod2:[80]] (4.055938033s elapsed) STEP: Deleting pod pod1 in namespace services-824 STEP: waiting up to 3m0s for 
service endpoint-test2 in namespace services-824 to expose endpoints map[pod2:[80]] Jun 19 14:01:48.879: INFO: successfully validated that service endpoint-test2 in namespace services-824 exposes endpoints map[pod2:[80]] (1.021955531s elapsed) STEP: Deleting pod pod2 in namespace services-824 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-824 to expose endpoints map[] Jun 19 14:01:48.901: INFO: successfully validated that service endpoint-test2 in namespace services-824 exposes endpoints map[] (17.584864ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:01:48.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-824" for this suite. Jun 19 14:02:10.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:02:11.035: INFO: namespace services-824 deletion completed in 22.097939764s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.416 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:02:11.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Jun 19 14:02:11.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6306' Jun 19 14:02:11.338: INFO: stderr: "" Jun 19 14:02:11.338: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Jun 19 14:02:12.342: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:02:12.342: INFO: Found 0 / 1 Jun 19 14:02:13.343: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:02:13.343: INFO: Found 0 / 1 Jun 19 14:02:14.342: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:02:14.342: INFO: Found 0 / 1 Jun 19 14:02:15.344: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:02:15.344: INFO: Found 1 / 1 Jun 19 14:02:15.344: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 19 14:02:15.348: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:02:15.348: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
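The steps that follow walk through kubectl's log-filtering flags one at a time. Stripped of the pod-specific arguments, the general forms, taken from the commands recorded below, are:

kubectl logs <pod> <container> -n <ns>                   # full log
kubectl logs <pod> <container> -n <ns> --tail=1          # last line only
kubectl logs <pod> <container> -n <ns> --limit-bytes=1   # first byte only
kubectl logs <pod> <container> -n <ns> --tail=1 --timestamps
kubectl logs <pod> <container> -n <ns> --since=1s        # empty if nothing was logged in the last second
kubectl logs <pod> <container> -n <ns> --since=24h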
STEP: checking for a matching string Jun 19 14:02:15.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vqrb2 redis-master --namespace=kubectl-6306' Jun 19 14:02:15.458: INFO: stderr: "" Jun 19 14:02:15.458: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 Jun 14:02:14.146 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Jun 14:02:14.146 # Server started, Redis version 3.2.12\n1:M 19 Jun 14:02:14.146 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Jun 14:02:14.146 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Jun 19 14:02:15.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vqrb2 redis-master --namespace=kubectl-6306 --tail=1' Jun 19 14:02:15.569: INFO: stderr: "" Jun 19 14:02:15.569: INFO: stdout: "1:M 19 Jun 14:02:14.146 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Jun 19 14:02:15.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vqrb2 redis-master --namespace=kubectl-6306 --limit-bytes=1' Jun 19 14:02:15.683: INFO: stderr: "" Jun 19 14:02:15.683: INFO: stdout: " " STEP: exposing timestamps Jun 19 14:02:15.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vqrb2 redis-master --namespace=kubectl-6306 --tail=1 --timestamps' Jun 19 14:02:15.798: INFO: stderr: "" Jun 19 14:02:15.798: INFO: stdout: "2020-06-19T14:02:14.146736377Z 1:M 19 Jun 14:02:14.146 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Jun 19 14:02:18.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vqrb2 redis-master --namespace=kubectl-6306 --since=1s' Jun 19 14:02:18.416: INFO: stderr: "" Jun 19 14:02:18.416: INFO: stdout: "" Jun 19 14:02:18.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-vqrb2 redis-master --namespace=kubectl-6306 --since=24h' Jun 19 14:02:18.528: INFO: stderr: "" Jun 19 14:02:18.528: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 19 Jun 14:02:14.146 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 19 Jun 14:02:14.146 # Server started, Redis version 3.2.12\n1:M 19 Jun 14:02:14.146 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 19 Jun 14:02:14.146 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Jun 19 14:02:18.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6306' Jun 19 14:02:18.657: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 19 14:02:18.657: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Jun 19 14:02:18.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-6306' Jun 19 14:02:18.760: INFO: stderr: "No resources found.\n" Jun 19 14:02:18.760: INFO: stdout: "" Jun 19 14:02:18.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-6306 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Jun 19 14:02:18.868: INFO: stderr: "" Jun 19 14:02:18.868: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:02:18.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6306" for this suite. 
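One detail worth keeping in mind from the teardown above: kubectl's own warning makes clear that with --grace-period=0 --force the API object is removed immediately, before the kubelet confirms the container actually stopped. The suite feeds the manifest on stdin with -f -; deleting by name is equivalent here:

kubectl delete rc redis-master -n kubectl-6306 --grace-period=0 --force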
Jun 19 14:02:40.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:02:40.953: INFO: namespace kubectl-6306 deletion completed in 22.081866718s • [SLOW TEST:29.917 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:02:40.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-b881f6bc-2e3a-4aa1-868b-6132fa514d6d STEP: Creating a pod to test consume configMaps Jun 19 14:02:41.066: INFO: Waiting up to 5m0s for pod "pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7" in namespace "configmap-7069" to be "success or failure" Jun 19 14:02:41.070: INFO: Pod "pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7": Phase="Pending", Reason="", readiness=false. Elapsed: 3.436225ms Jun 19 14:02:43.074: INFO: Pod "pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007692829s Jun 19 14:02:45.079: INFO: Pod "pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7": Phase="Running", Reason="", readiness=true. Elapsed: 4.012424675s Jun 19 14:02:47.083: INFO: Pod "pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.017132068s STEP: Saw pod success Jun 19 14:02:47.083: INFO: Pod "pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7" satisfied condition "success or failure" Jun 19 14:02:47.086: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7 container configmap-volume-test: STEP: delete the pod Jun 19 14:02:47.107: INFO: Waiting for pod pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7 to disappear Jun 19 14:02:47.111: INFO: Pod pod-configmaps-10e0ffed-3770-4327-8a3e-13e076a1a1f7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:02:47.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7069" for this suite. 
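This mapped variant differs from a plain ConfigMap mount in two ways: the pod runs as a non-root UID, and the volume maps one key to a nested path instead of exposing every key at the mount root. A sketch; the UID, key, and path are illustrative, the ConfigMap name is the one from the log:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-mapped-demo
spec:
  securityContext:
    runAsUser: 1000                   # illustrative non-root UID
  containers:
  - name: configmap-volume-test
    image: busybox                    # illustrative
    command: ["sh", "-c", "cat /etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  restartPolicy: Never
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map-b881f6bc-2e3a-4aa1-868b-6132fa514d6d
      items:
      - key: data-2                   # illustrative key
        path: path/to/data-2          # mapped path under the mount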
Jun 19 14:02:53.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:02:53.231: INFO: namespace configmap-7069 deletion completed in 6.116105788s • [SLOW TEST:12.278 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:02:53.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-35ba68b5-3d97-4c1c-af25-267f7f89a1dc in namespace container-probe-6620 Jun 19 14:02:57.322: INFO: Started pod test-webserver-35ba68b5-3d97-4c1c-af25-267f7f89a1dc in namespace container-probe-6620 STEP: checking the pod's current state and verifying that restartCount is present Jun 19 14:02:57.325: INFO: Initial restart count of pod test-webserver-35ba68b5-3d97-4c1c-af25-267f7f89a1dc is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:06:57.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6620" for this suite. 
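This is the slow probe spec: it starts a well-behaved HTTP server with a liveness probe and then simply watches restartCount stay at 0 for roughly four minutes, which is where the 250-second runtime comes from. A sketch of the probe wiring; image, path, and thresholds are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: test-webserver-demo
spec:
  containers:
  - name: test-webserver
    image: nginx:1.14-alpine          # illustrative; the suite uses its own test-webserver image
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /                       # the suite probes /healthz on its image; any 200-serving path shows the same "no restart" behavior
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 3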
Jun 19 14:07:03.936: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:07:04.008: INFO: namespace container-probe-6620 deletion completed in 6.144291242s • [SLOW TEST:250.778 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:07:04.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-15883368-3ec3-4902-963d-72597e5b64ec STEP: Creating a pod to test consume secrets Jun 19 14:07:04.127: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-368d66ed-b4df-4d8a-b93e-d6f5cc936371" in namespace "projected-6992" to be "success or failure" Jun 19 14:07:04.130: INFO: Pod "pod-projected-secrets-368d66ed-b4df-4d8a-b93e-d6f5cc936371": Phase="Pending", Reason="", readiness=false. Elapsed: 3.446853ms Jun 19 14:07:06.134: INFO: Pod "pod-projected-secrets-368d66ed-b4df-4d8a-b93e-d6f5cc936371": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007508098s Jun 19 14:07:08.139: INFO: Pod "pod-projected-secrets-368d66ed-b4df-4d8a-b93e-d6f5cc936371": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012060694s STEP: Saw pod success Jun 19 14:07:08.139: INFO: Pod "pod-projected-secrets-368d66ed-b4df-4d8a-b93e-d6f5cc936371" satisfied condition "success or failure" Jun 19 14:07:08.142: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-368d66ed-b4df-4d8a-b93e-d6f5cc936371 container secret-volume-test: STEP: delete the pod Jun 19 14:07:08.162: INFO: Waiting for pod pod-projected-secrets-368d66ed-b4df-4d8a-b93e-d6f5cc936371 to disappear Jun 19 14:07:08.166: INFO: Pod pod-projected-secrets-368d66ed-b4df-4d8a-b93e-d6f5cc936371 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:07:08.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6992" for this suite. 
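Same multiple-mount idea as the earlier Secrets spec, but through the projected volume type, which nests the secret inside a sources list. A sketch reusing the secret name from the log; image and mount paths are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-demo
spec:
  containers:
  - name: secret-volume-test
    image: busybox                    # illustrative
    command: ["sh", "-c", "ls /proj-1 /proj-2"]
    volumeMounts:
    - { name: proj-1, mountPath: /proj-1, readOnly: true }
    - { name: proj-2, mountPath: /proj-2, readOnly: true }
  restartPolicy: Never
  volumes:
  - name: proj-1
    projected:
      sources:
      - secret:
          name: projected-secret-test-15883368-3ec3-4902-963d-72597e5b64ec
  - name: proj-2
    projected:
      sources:
      - secret:
          name: projected-secret-test-15883368-3ec3-4902-963d-72597e5b64ec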
Jun 19 14:07:14.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:07:14.295: INFO: namespace projected-6992 deletion completed in 6.125858277s • [SLOW TEST:10.287 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:07:14.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-7689 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Jun 19 14:07:14.397: INFO: Found 0 stateful pods, waiting for 3 Jun 19 14:07:24.402: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 19 14:07:24.402: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 19 14:07:24.402: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false Jun 19 14:07:34.402: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Jun 19 14:07:34.402: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Jun 19 14:07:34.402: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Jun 19 14:07:34.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7689 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 14:07:34.708: INFO: stderr: "I0619 14:07:34.545644 2957 log.go:172] (0xc000962370) (0xc00077e6e0) Create stream\nI0619 14:07:34.545707 2957 log.go:172] (0xc000962370) (0xc00077e6e0) Stream added, broadcasting: 1\nI0619 14:07:34.548536 2957 log.go:172] (0xc000962370) Reply frame received for 1\nI0619 14:07:34.548583 2957 log.go:172] (0xc000962370) (0xc00090a000) Create stream\nI0619 14:07:34.548603 2957 log.go:172] (0xc000962370) (0xc00090a000) Stream added, broadcasting: 3\nI0619 14:07:34.549914 2957 log.go:172] (0xc000962370) Reply frame received for 3\nI0619 14:07:34.549952 2957 log.go:172] (0xc000962370) (0xc0007be280) Create stream\nI0619 14:07:34.549973 2957 log.go:172] (0xc000962370) (0xc0007be280) Stream added, broadcasting: 5\nI0619 14:07:34.550871 2957 log.go:172] 
(0xc000962370) Reply frame received for 5\nI0619 14:07:34.665936 2957 log.go:172] (0xc000962370) Data frame received for 5\nI0619 14:07:34.665958 2957 log.go:172] (0xc0007be280) (5) Data frame handling\nI0619 14:07:34.665969 2957 log.go:172] (0xc0007be280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 14:07:34.699522 2957 log.go:172] (0xc000962370) Data frame received for 3\nI0619 14:07:34.699558 2957 log.go:172] (0xc00090a000) (3) Data frame handling\nI0619 14:07:34.699584 2957 log.go:172] (0xc00090a000) (3) Data frame sent\nI0619 14:07:34.699998 2957 log.go:172] (0xc000962370) Data frame received for 3\nI0619 14:07:34.700035 2957 log.go:172] (0xc00090a000) (3) Data frame handling\nI0619 14:07:34.700067 2957 log.go:172] (0xc000962370) Data frame received for 5\nI0619 14:07:34.700093 2957 log.go:172] (0xc0007be280) (5) Data frame handling\nI0619 14:07:34.701726 2957 log.go:172] (0xc000962370) Data frame received for 1\nI0619 14:07:34.701745 2957 log.go:172] (0xc00077e6e0) (1) Data frame handling\nI0619 14:07:34.701757 2957 log.go:172] (0xc00077e6e0) (1) Data frame sent\nI0619 14:07:34.701768 2957 log.go:172] (0xc000962370) (0xc00077e6e0) Stream removed, broadcasting: 1\nI0619 14:07:34.701896 2957 log.go:172] (0xc000962370) Go away received\nI0619 14:07:34.702083 2957 log.go:172] (0xc000962370) (0xc00077e6e0) Stream removed, broadcasting: 1\nI0619 14:07:34.702096 2957 log.go:172] (0xc000962370) (0xc00090a000) Stream removed, broadcasting: 3\nI0619 14:07:34.702103 2957 log.go:172] (0xc000962370) (0xc0007be280) Stream removed, broadcasting: 5\n" Jun 19 14:07:34.708: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 14:07:34.708: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Jun 19 14:07:44.741: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order Jun 19 14:07:54.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7689 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 14:07:57.774: INFO: stderr: "I0619 14:07:57.706860 2977 log.go:172] (0xc00059c420) (0xc00010ea00) Create stream\nI0619 14:07:57.706896 2977 log.go:172] (0xc00059c420) (0xc00010ea00) Stream added, broadcasting: 1\nI0619 14:07:57.709866 2977 log.go:172] (0xc00059c420) Reply frame received for 1\nI0619 14:07:57.709924 2977 log.go:172] (0xc00059c420) (0xc00010eaa0) Create stream\nI0619 14:07:57.709941 2977 log.go:172] (0xc00059c420) (0xc00010eaa0) Stream added, broadcasting: 3\nI0619 14:07:57.711085 2977 log.go:172] (0xc00059c420) Reply frame received for 3\nI0619 14:07:57.711126 2977 log.go:172] (0xc00059c420) (0xc00091c000) Create stream\nI0619 14:07:57.711141 2977 log.go:172] (0xc00059c420) (0xc00091c000) Stream added, broadcasting: 5\nI0619 14:07:57.711902 2977 log.go:172] (0xc00059c420) Reply frame received for 5\nI0619 14:07:57.766153 2977 log.go:172] (0xc00059c420) Data frame received for 5\nI0619 14:07:57.766188 2977 log.go:172] (0xc00091c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0619 14:07:57.766226 2977 log.go:172] (0xc00059c420) Data frame received for 3\nI0619 14:07:57.766270 2977 log.go:172] (0xc00010eaa0) (3) Data frame handling\nI0619 14:07:57.766298 2977 log.go:172] 
(0xc00010eaa0) (3) Data frame sent\nI0619 14:07:57.766323 2977 log.go:172] (0xc00059c420) Data frame received for 3\nI0619 14:07:57.766344 2977 log.go:172] (0xc00010eaa0) (3) Data frame handling\nI0619 14:07:57.766374 2977 log.go:172] (0xc00091c000) (5) Data frame sent\nI0619 14:07:57.766401 2977 log.go:172] (0xc00059c420) Data frame received for 5\nI0619 14:07:57.766420 2977 log.go:172] (0xc00091c000) (5) Data frame handling\nI0619 14:07:57.767837 2977 log.go:172] (0xc00059c420) Data frame received for 1\nI0619 14:07:57.767859 2977 log.go:172] (0xc00010ea00) (1) Data frame handling\nI0619 14:07:57.767878 2977 log.go:172] (0xc00010ea00) (1) Data frame sent\nI0619 14:07:57.767892 2977 log.go:172] (0xc00059c420) (0xc00010ea00) Stream removed, broadcasting: 1\nI0619 14:07:57.768203 2977 log.go:172] (0xc00059c420) Go away received\nI0619 14:07:57.768263 2977 log.go:172] (0xc00059c420) (0xc00010ea00) Stream removed, broadcasting: 1\nI0619 14:07:57.768287 2977 log.go:172] (0xc00059c420) (0xc00010eaa0) Stream removed, broadcasting: 3\nI0619 14:07:57.768300 2977 log.go:172] (0xc00059c420) (0xc00091c000) Stream removed, broadcasting: 5\n" Jun 19 14:07:57.774: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 14:07:57.774: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 14:08:17.796: INFO: Waiting for StatefulSet statefulset-7689/ss2 to complete update Jun 19 14:08:17.797: INFO: Waiting for Pod statefulset-7689/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Jun 19 14:08:27.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7689 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 14:08:28.064: INFO: stderr: "I0619 14:08:27.935761 3008 log.go:172] (0xc0009ac370) (0xc00080e6e0) Create stream\nI0619 14:08:27.935832 3008 log.go:172] (0xc0009ac370) (0xc00080e6e0) Stream added, broadcasting: 1\nI0619 14:08:27.938301 3008 log.go:172] (0xc0009ac370) Reply frame received for 1\nI0619 14:08:27.938331 3008 log.go:172] (0xc0009ac370) (0xc00010e140) Create stream\nI0619 14:08:27.938352 3008 log.go:172] (0xc0009ac370) (0xc00010e140) Stream added, broadcasting: 3\nI0619 14:08:27.939263 3008 log.go:172] (0xc0009ac370) Reply frame received for 3\nI0619 14:08:27.939306 3008 log.go:172] (0xc0009ac370) (0xc00096e000) Create stream\nI0619 14:08:27.939325 3008 log.go:172] (0xc0009ac370) (0xc00096e000) Stream added, broadcasting: 5\nI0619 14:08:27.940334 3008 log.go:172] (0xc0009ac370) Reply frame received for 5\nI0619 14:08:28.030115 3008 log.go:172] (0xc0009ac370) Data frame received for 5\nI0619 14:08:28.030155 3008 log.go:172] (0xc00096e000) (5) Data frame handling\nI0619 14:08:28.030182 3008 log.go:172] (0xc00096e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 14:08:28.054871 3008 log.go:172] (0xc0009ac370) Data frame received for 3\nI0619 14:08:28.054923 3008 log.go:172] (0xc00010e140) (3) Data frame handling\nI0619 14:08:28.054946 3008 log.go:172] (0xc0009ac370) Data frame received for 5\nI0619 14:08:28.054978 3008 log.go:172] (0xc00096e000) (5) Data frame handling\nI0619 14:08:28.055003 3008 log.go:172] (0xc00010e140) (3) Data frame sent\nI0619 14:08:28.055452 3008 log.go:172] (0xc0009ac370) Data frame received for 3\nI0619 14:08:28.055488 3008 log.go:172] (0xc00010e140) (3) Data frame handling\nI0619 
14:08:28.057736 3008 log.go:172] (0xc0009ac370) Data frame received for 1\nI0619 14:08:28.057761 3008 log.go:172] (0xc00080e6e0) (1) Data frame handling\nI0619 14:08:28.057773 3008 log.go:172] (0xc00080e6e0) (1) Data frame sent\nI0619 14:08:28.057791 3008 log.go:172] (0xc0009ac370) (0xc00080e6e0) Stream removed, broadcasting: 1\nI0619 14:08:28.057820 3008 log.go:172] (0xc0009ac370) Go away received\nI0619 14:08:28.058166 3008 log.go:172] (0xc0009ac370) (0xc00080e6e0) Stream removed, broadcasting: 1\nI0619 14:08:28.058183 3008 log.go:172] (0xc0009ac370) (0xc00010e140) Stream removed, broadcasting: 3\nI0619 14:08:28.058189 3008 log.go:172] (0xc0009ac370) (0xc00096e000) Stream removed, broadcasting: 5\n" Jun 19 14:08:28.064: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 14:08:28.064: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 14:08:38.095: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Jun 19 14:08:48.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7689 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 14:08:48.333: INFO: stderr: "I0619 14:08:48.259476 3029 log.go:172] (0xc00080a6e0) (0xc000a68960) Create stream\nI0619 14:08:48.259534 3029 log.go:172] (0xc00080a6e0) (0xc000a68960) Stream added, broadcasting: 1\nI0619 14:08:48.264034 3029 log.go:172] (0xc00080a6e0) Reply frame received for 1\nI0619 14:08:48.264079 3029 log.go:172] (0xc00080a6e0) (0xc000a68000) Create stream\nI0619 14:08:48.264094 3029 log.go:172] (0xc00080a6e0) (0xc000a68000) Stream added, broadcasting: 3\nI0619 14:08:48.265320 3029 log.go:172] (0xc00080a6e0) Reply frame received for 3\nI0619 14:08:48.265380 3029 log.go:172] (0xc00080a6e0) (0xc000a680a0) Create stream\nI0619 14:08:48.265415 3029 log.go:172] (0xc00080a6e0) (0xc000a680a0) Stream added, broadcasting: 5\nI0619 14:08:48.266311 3029 log.go:172] (0xc00080a6e0) Reply frame received for 5\nI0619 14:08:48.325590 3029 log.go:172] (0xc00080a6e0) Data frame received for 3\nI0619 14:08:48.325623 3029 log.go:172] (0xc000a68000) (3) Data frame handling\nI0619 14:08:48.325639 3029 log.go:172] (0xc000a68000) (3) Data frame sent\nI0619 14:08:48.325666 3029 log.go:172] (0xc00080a6e0) Data frame received for 5\nI0619 14:08:48.325677 3029 log.go:172] (0xc000a680a0) (5) Data frame handling\nI0619 14:08:48.325688 3029 log.go:172] (0xc000a680a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0619 14:08:48.325699 3029 log.go:172] (0xc00080a6e0) Data frame received for 5\nI0619 14:08:48.325717 3029 log.go:172] (0xc000a680a0) (5) Data frame handling\nI0619 14:08:48.325734 3029 log.go:172] (0xc00080a6e0) Data frame received for 3\nI0619 14:08:48.325743 3029 log.go:172] (0xc000a68000) (3) Data frame handling\nI0619 14:08:48.326913 3029 log.go:172] (0xc00080a6e0) Data frame received for 1\nI0619 14:08:48.326926 3029 log.go:172] (0xc000a68960) (1) Data frame handling\nI0619 14:08:48.326938 3029 log.go:172] (0xc000a68960) (1) Data frame sent\nI0619 14:08:48.326953 3029 log.go:172] (0xc00080a6e0) (0xc000a68960) Stream removed, broadcasting: 1\nI0619 14:08:48.326970 3029 log.go:172] (0xc00080a6e0) Go away received\nI0619 14:08:48.327253 3029 log.go:172] (0xc00080a6e0) (0xc000a68960) Stream removed, broadcasting: 1\nI0619 14:08:48.327267 3029 log.go:172] (0xc00080a6e0) (0xc000a68000) Stream removed, broadcasting: 
3\nI0619 14:08:48.327272 3029 log.go:172] (0xc00080a6e0) (0xc000a680a0) Stream removed, broadcasting: 5\n" Jun 19 14:08:48.333: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 14:08:48.334: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 14:08:58.366: INFO: Waiting for StatefulSet statefulset-7689/ss2 to complete update Jun 19 14:08:58.366: INFO: Waiting for Pod statefulset-7689/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 19 14:08:58.366: INFO: Waiting for Pod statefulset-7689/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 19 14:08:58.366: INFO: Waiting for Pod statefulset-7689/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 19 14:09:08.375: INFO: Waiting for StatefulSet statefulset-7689/ss2 to complete update Jun 19 14:09:08.375: INFO: Waiting for Pod statefulset-7689/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 19 14:09:08.375: INFO: Waiting for Pod statefulset-7689/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd Jun 19 14:09:18.374: INFO: Waiting for StatefulSet statefulset-7689/ss2 to complete update Jun 19 14:09:18.375: INFO: Waiting for Pod statefulset-7689/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 19 14:09:28.389: INFO: Deleting all statefulset in ns statefulset-7689 Jun 19 14:09:28.392: INFO: Scaling statefulset ss2 to 0 Jun 19 14:09:58.410: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 14:09:58.413: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:09:58.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7689" for this suite. 
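What the run above amounts to: roll the pod template image from nginx:1.14-alpine to nginx:1.15-alpine under the RollingUpdate strategy, then roll it back, with the controller replacing pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0) and the controller-revision hashes seen in the log (ss2-6c5cd755cd, ss2-7c9b54fd4c) identifying the two templates. A sketch of the moving parts; selector labels are illustrative, the image names are the ones from the log:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test                   # headless service the suite creates first
  replicas: 3
  selector:
    matchLabels:
      app: ss2                        # illustrative label
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine

# roll forward, then back; each template change creates a new controller revision:
#   kubectl -n statefulset-7689 set image statefulset/ss2 nginx=docker.io/library/nginx:1.15-alpine
#   kubectl -n statefulset-7689 rollout status statefulset/ss2
#   kubectl -n statefulset-7689 set image statefulset/ss2 nginx=docker.io/library/nginx:1.14-alpine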
Jun 19 14:10:06.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:10:06.540: INFO: namespace statefulset-7689 deletion completed in 8.097629207s • [SLOW TEST:172.245 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:10:06.541: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4264 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 19 14:10:06.617: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 19 14:10:32.733: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.127:8080/dial?request=hostName&protocol=udp&host=10.244.2.126&port=8081&tries=1'] Namespace:pod-network-test-4264 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 14:10:32.733: INFO: >>> kubeConfig: /root/.kube/config I0619 14:10:32.768072 6 log.go:172] (0xc000e64790) (0xc00297d9a0) Create stream I0619 14:10:32.768097 6 log.go:172] (0xc000e64790) (0xc00297d9a0) Stream added, broadcasting: 1 I0619 14:10:32.771215 6 log.go:172] (0xc000e64790) Reply frame received for 1 I0619 14:10:32.771277 6 log.go:172] (0xc000e64790) (0xc001698fa0) Create stream I0619 14:10:32.771342 6 log.go:172] (0xc000e64790) (0xc001698fa0) Stream added, broadcasting: 3 I0619 14:10:32.772723 6 log.go:172] (0xc000e64790) Reply frame received for 3 I0619 14:10:32.772783 6 log.go:172] (0xc000e64790) (0xc0001d8e60) Create stream I0619 14:10:32.772807 6 log.go:172] (0xc000e64790) (0xc0001d8e60) Stream added, broadcasting: 5 I0619 14:10:32.774144 6 log.go:172] (0xc000e64790) Reply frame received for 5 I0619 14:10:32.927379 6 log.go:172] (0xc000e64790) Data frame received for 3 I0619 14:10:32.927413 6 log.go:172] (0xc001698fa0) (3) Data frame handling I0619 14:10:32.927437 6 log.go:172] (0xc001698fa0) (3) Data frame sent I0619 14:10:32.928019 6 log.go:172] (0xc000e64790) Data frame received for 3 I0619 14:10:32.928055 6 log.go:172] (0xc001698fa0) (3) Data frame handling I0619 14:10:32.928562 6 log.go:172] (0xc000e64790) Data frame received for 5 I0619 14:10:32.928593 6 log.go:172] (0xc0001d8e60) (5) Data frame handling I0619 14:10:32.930499 6 
log.go:172] (0xc000e64790) Data frame received for 1 I0619 14:10:32.930578 6 log.go:172] (0xc00297d9a0) (1) Data frame handling I0619 14:10:32.930648 6 log.go:172] (0xc00297d9a0) (1) Data frame sent I0619 14:10:32.930685 6 log.go:172] (0xc000e64790) (0xc00297d9a0) Stream removed, broadcasting: 1 I0619 14:10:32.930721 6 log.go:172] (0xc000e64790) Go away received I0619 14:10:32.930818 6 log.go:172] (0xc000e64790) (0xc00297d9a0) Stream removed, broadcasting: 1 I0619 14:10:32.930839 6 log.go:172] (0xc000e64790) (0xc001698fa0) Stream removed, broadcasting: 3 I0619 14:10:32.930861 6 log.go:172] (0xc000e64790) (0xc0001d8e60) Stream removed, broadcasting: 5 Jun 19 14:10:32.930: INFO: Waiting for endpoints: map[] Jun 19 14:10:32.940: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.127:8080/dial?request=hostName&protocol=udp&host=10.244.1.104&port=8081&tries=1'] Namespace:pod-network-test-4264 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 14:10:32.941: INFO: >>> kubeConfig: /root/.kube/config I0619 14:10:32.975295 6 log.go:172] (0xc000dbc580) (0xc0034ac460) Create stream I0619 14:10:32.975326 6 log.go:172] (0xc000dbc580) (0xc0034ac460) Stream added, broadcasting: 1 I0619 14:10:32.983793 6 log.go:172] (0xc000dbc580) Reply frame received for 1 I0619 14:10:32.983862 6 log.go:172] (0xc000dbc580) (0xc0034ac500) Create stream I0619 14:10:32.983891 6 log.go:172] (0xc000dbc580) (0xc0034ac500) Stream added, broadcasting: 3 I0619 14:10:32.985646 6 log.go:172] (0xc000dbc580) Reply frame received for 3 I0619 14:10:32.985704 6 log.go:172] (0xc000dbc580) (0xc0017d4dc0) Create stream I0619 14:10:32.985733 6 log.go:172] (0xc000dbc580) (0xc0017d4dc0) Stream added, broadcasting: 5 I0619 14:10:32.987325 6 log.go:172] (0xc000dbc580) Reply frame received for 5 I0619 14:10:33.050549 6 log.go:172] (0xc000dbc580) Data frame received for 3 I0619 14:10:33.050599 6 log.go:172] (0xc0034ac500) (3) Data frame handling I0619 14:10:33.050638 6 log.go:172] (0xc0034ac500) (3) Data frame sent I0619 14:10:33.050989 6 log.go:172] (0xc000dbc580) Data frame received for 5 I0619 14:10:33.051032 6 log.go:172] (0xc000dbc580) Data frame received for 3 I0619 14:10:33.051083 6 log.go:172] (0xc0034ac500) (3) Data frame handling I0619 14:10:33.051123 6 log.go:172] (0xc0017d4dc0) (5) Data frame handling I0619 14:10:33.052849 6 log.go:172] (0xc000dbc580) Data frame received for 1 I0619 14:10:33.052879 6 log.go:172] (0xc0034ac460) (1) Data frame handling I0619 14:10:33.052897 6 log.go:172] (0xc0034ac460) (1) Data frame sent I0619 14:10:33.052928 6 log.go:172] (0xc000dbc580) (0xc0034ac460) Stream removed, broadcasting: 1 I0619 14:10:33.052963 6 log.go:172] (0xc000dbc580) Go away received I0619 14:10:33.053074 6 log.go:172] (0xc000dbc580) (0xc0034ac460) Stream removed, broadcasting: 1 I0619 14:10:33.053098 6 log.go:172] (0xc000dbc580) (0xc0034ac500) Stream removed, broadcasting: 3 I0619 14:10:33.053343 6 log.go:172] (0xc000dbc580) (0xc0017d4dc0) Stream removed, broadcasting: 5 Jun 19 14:10:33.053: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:10:33.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4264" for this suite. 
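The curl above comes from the suite's host test container: it hits a /dial endpoint on one test pod, which sends a UDP probe to the other pod and reports back which hostname answered. With the concrete IPs from this run replaced by placeholders, the probe is:

curl -g -q -s 'http://<webserver-pod-ip>:8080/dial?request=hostName&protocol=udp&host=<target-pod-ip>&port=8081&tries=1'
# the reply names the hostname that answered; the test repeats this until every
# expected endpoint has been seen, so "Waiting for endpoints: map[]" means none are missing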
Jun 19 14:10:55.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:10:55.154: INFO: namespace pod-network-test-4264 deletion completed in 22.097386335s • [SLOW TEST:48.613 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:10:55.155: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-ba5ccf6f-23c1-46cd-b114-203dc96f3a66 STEP: Creating a pod to test consume secrets Jun 19 14:10:55.211: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-fe3b395d-ccf3-45db-bd16-d0415b98dec3" in namespace "projected-7334" to be "success or failure" Jun 19 14:10:55.225: INFO: Pod "pod-projected-secrets-fe3b395d-ccf3-45db-bd16-d0415b98dec3": Phase="Pending", Reason="", readiness=false. Elapsed: 14.07625ms Jun 19 14:10:57.229: INFO: Pod "pod-projected-secrets-fe3b395d-ccf3-45db-bd16-d0415b98dec3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017971661s Jun 19 14:10:59.232: INFO: Pod "pod-projected-secrets-fe3b395d-ccf3-45db-bd16-d0415b98dec3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02116665s STEP: Saw pod success Jun 19 14:10:59.232: INFO: Pod "pod-projected-secrets-fe3b395d-ccf3-45db-bd16-d0415b98dec3" satisfied condition "success or failure" Jun 19 14:10:59.234: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-fe3b395d-ccf3-45db-bd16-d0415b98dec3 container projected-secret-volume-test: STEP: delete the pod Jun 19 14:10:59.275: INFO: Waiting for pod pod-projected-secrets-fe3b395d-ccf3-45db-bd16-d0415b98dec3 to disappear Jun 19 14:10:59.286: INFO: Pod pod-projected-secrets-fe3b395d-ccf3-45db-bd16-d0415b98dec3 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:10:59.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7334" for this suite. 
Jun 19 14:11:05.302: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:11:05.379: INFO: namespace projected-7334 deletion completed in 6.08924922s • [SLOW TEST:10.225 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:11:05.380: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:11:05.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05b18109-4b72-436d-b029-21a1204d2a15" in namespace "downward-api-3775" to be "success or failure" Jun 19 14:11:05.436: INFO: Pod "downwardapi-volume-05b18109-4b72-436d-b029-21a1204d2a15": Phase="Pending", Reason="", readiness=false. Elapsed: 3.365773ms Jun 19 14:11:07.440: INFO: Pod "downwardapi-volume-05b18109-4b72-436d-b029-21a1204d2a15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007098668s Jun 19 14:11:09.444: INFO: Pod "downwardapi-volume-05b18109-4b72-436d-b029-21a1204d2a15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010887162s STEP: Saw pod success Jun 19 14:11:09.444: INFO: Pod "downwardapi-volume-05b18109-4b72-436d-b029-21a1204d2a15" satisfied condition "success or failure" Jun 19 14:11:09.447: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-05b18109-4b72-436d-b029-21a1204d2a15 container client-container: STEP: delete the pod Jun 19 14:11:09.530: INFO: Waiting for pod downwardapi-volume-05b18109-4b72-436d-b029-21a1204d2a15 to disappear Jun 19 14:11:09.538: INFO: Pod downwardapi-volume-05b18109-4b72-436d-b029-21a1204d2a15 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:11:09.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3775" for this suite. 
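The downward API volume plugin exercised here materializes a container's own resource requests as files in the pod. A minimal sketch of the volume definition this kind of test relies on; the file path is illustrative, while the container name matches the log.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A downwardAPI volume exposing the container's CPU request as a file;
	// the container then reads e.g. /etc/podinfo/cpu_request and the test
	// compares the content against the declared request.
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			DownwardAPI: &corev1.DownwardAPIVolumeSource{
				Items: []corev1.DownwardAPIVolumeFile{{
					Path: "cpu_request",
					ResourceFieldRef: &corev1.ResourceFieldSelector{
						ContainerName: "client-container",
						Resource:      "requests.cpu",
					},
				}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}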
Jun 19 14:11:15.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:11:15.640: INFO: namespace downward-api-3775 deletion completed in 6.095481993s • [SLOW TEST:10.260 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:11:15.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Jun 19 14:11:23.825: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:23.844: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:25.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:25.849: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:27.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:27.849: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:29.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:29.849: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:31.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:31.850: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:33.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:33.849: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:35.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:35.850: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:37.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:37.850: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:39.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:39.849: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:41.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:41.849: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:43.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:43.851: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:45.845: INFO: Waiting for pod 
pod-with-poststart-exec-hook to disappear Jun 19 14:11:45.850: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:47.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:47.850: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:49.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:49.849: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:51.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:51.849: INFO: Pod pod-with-poststart-exec-hook still exists Jun 19 14:11:53.845: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Jun 19 14:11:53.849: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:11:53.849: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3837" for this suite. Jun 19 14:12:15.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:12:15.945: INFO: namespace container-lifecycle-hook-3837 deletion completed in 22.092404655s • [SLOW TEST:60.305 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:12:15.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3725.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3725.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 19 14:12:24.109: INFO: DNS probes using dns-test-0c2ec980-e683-4bad-a337-78676a18ed0b succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3725.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for 
i in `seq 1 30`; do dig +short dns-test-service-3.dns-3725.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 19 14:12:32.232: INFO: File wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 19 14:12:32.236: INFO: File jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 19 14:12:32.236: INFO: Lookups using dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d failed for: [wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local] Jun 19 14:12:37.240: INFO: File wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 19 14:12:37.243: INFO: File jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 19 14:12:37.243: INFO: Lookups using dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d failed for: [wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local] Jun 19 14:12:42.241: INFO: File wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 19 14:12:42.245: INFO: File jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 19 14:12:42.245: INFO: Lookups using dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d failed for: [wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local] Jun 19 14:12:47.242: INFO: File wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 19 14:12:47.246: INFO: File jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' Jun 19 14:12:47.246: INFO: Lookups using dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d failed for: [wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local] Jun 19 14:12:52.245: INFO: File jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local from pod dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d contains 'foo.example.com. ' instead of 'bar.example.com.' 
Jun 19 14:12:52.245: INFO: Lookups using dns-3725/dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d failed for: [jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local] Jun 19 14:12:57.244: INFO: DNS probes using dns-test-b6b01ee8-eaeb-4b26-8aea-fcdc709c075d succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3725.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3725.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3725.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3725.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 19 14:13:03.805: INFO: DNS probes using dns-test-41458103-ea69-4577-94d5-7c81403c5ec6 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:13:03.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3725" for this suite. Jun 19 14:13:09.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:13:10.022: INFO: namespace dns-3725 deletion completed in 6.130866664s • [SLOW TEST:54.077 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:13:10.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:13:10.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a8ce61fa-cbb6-4079-b292-ce87dd8e1931" in namespace "projected-1566" to be "success or failure" Jun 19 14:13:10.104: INFO: Pod "downwardapi-volume-a8ce61fa-cbb6-4079-b292-ce87dd8e1931": Phase="Pending", Reason="", readiness=false. Elapsed: 3.679007ms Jun 19 14:13:12.225: INFO: Pod "downwardapi-volume-a8ce61fa-cbb6-4079-b292-ce87dd8e1931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125076799s Jun 19 14:13:14.229: INFO: Pod "downwardapi-volume-a8ce61fa-cbb6-4079-b292-ce87dd8e1931": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.129158198s STEP: Saw pod success Jun 19 14:13:14.229: INFO: Pod "downwardapi-volume-a8ce61fa-cbb6-4079-b292-ce87dd8e1931" satisfied condition "success or failure" Jun 19 14:13:14.233: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a8ce61fa-cbb6-4079-b292-ce87dd8e1931 container client-container: STEP: delete the pod Jun 19 14:13:14.256: INFO: Waiting for pod downwardapi-volume-a8ce61fa-cbb6-4079-b292-ce87dd8e1931 to disappear Jun 19 14:13:14.259: INFO: Pod downwardapi-volume-a8ce61fa-cbb6-4079-b292-ce87dd8e1931 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:13:14.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1566" for this suite. Jun 19 14:13:20.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:13:20.366: INFO: namespace projected-1566 deletion completed in 6.10303705s • [SLOW TEST:10.343 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:13:20.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Jun 19 14:13:20.436: INFO: Waiting up to 5m0s for pod "client-containers-d5403e2b-4cd5-4c5e-910c-9446f6ab7907" in namespace "containers-5101" to be "success or failure" Jun 19 14:13:20.464: INFO: Pod "client-containers-d5403e2b-4cd5-4c5e-910c-9446f6ab7907": Phase="Pending", Reason="", readiness=false. Elapsed: 27.874772ms Jun 19 14:13:22.468: INFO: Pod "client-containers-d5403e2b-4cd5-4c5e-910c-9446f6ab7907": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03235149s Jun 19 14:13:24.488: INFO: Pod "client-containers-d5403e2b-4cd5-4c5e-910c-9446f6ab7907": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.05219607s STEP: Saw pod success Jun 19 14:13:24.488: INFO: Pod "client-containers-d5403e2b-4cd5-4c5e-910c-9446f6ab7907" satisfied condition "success or failure" Jun 19 14:13:24.491: INFO: Trying to get logs from node iruya-worker2 pod client-containers-d5403e2b-4cd5-4c5e-910c-9446f6ab7907 container test-container: STEP: delete the pod Jun 19 14:13:24.513: INFO: Waiting for pod client-containers-d5403e2b-4cd5-4c5e-910c-9446f6ab7907 to disappear Jun 19 14:13:24.517: INFO: Pod client-containers-d5403e2b-4cd5-4c5e-910c-9446f6ab7907 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:13:24.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5101" for this suite. Jun 19 14:13:30.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:13:30.615: INFO: namespace containers-5101 deletion completed in 6.094110215s • [SLOW TEST:10.249 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:13:30.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1624 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1624 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1624 Jun 19 14:13:30.728: INFO: Found 0 stateful pods, waiting for 1 Jun 19 14:13:40.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Jun 19 14:13:40.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1624 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 14:13:41.025: INFO: stderr: "I0619 14:13:40.863075 3050 log.go:172] (0xc0009c60b0) (0xc0009706e0) Create stream\nI0619 14:13:40.863136 3050 log.go:172] (0xc0009c60b0) (0xc0009706e0) 
Stream added, broadcasting: 1\nI0619 14:13:40.865042 3050 log.go:172] (0xc0009c60b0) Reply frame received for 1\nI0619 14:13:40.865075 3050 log.go:172] (0xc0009c60b0) (0xc000970780) Create stream\nI0619 14:13:40.865084 3050 log.go:172] (0xc0009c60b0) (0xc000970780) Stream added, broadcasting: 3\nI0619 14:13:40.866078 3050 log.go:172] (0xc0009c60b0) Reply frame received for 3\nI0619 14:13:40.866106 3050 log.go:172] (0xc0009c60b0) (0xc000970820) Create stream\nI0619 14:13:40.866115 3050 log.go:172] (0xc0009c60b0) (0xc000970820) Stream added, broadcasting: 5\nI0619 14:13:40.866924 3050 log.go:172] (0xc0009c60b0) Reply frame received for 5\nI0619 14:13:40.963296 3050 log.go:172] (0xc0009c60b0) Data frame received for 5\nI0619 14:13:40.963343 3050 log.go:172] (0xc000970820) (5) Data frame handling\nI0619 14:13:40.963373 3050 log.go:172] (0xc000970820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 14:13:41.017095 3050 log.go:172] (0xc0009c60b0) Data frame received for 3\nI0619 14:13:41.017360 3050 log.go:172] (0xc000970780) (3) Data frame handling\nI0619 14:13:41.017406 3050 log.go:172] (0xc000970780) (3) Data frame sent\nI0619 14:13:41.017678 3050 log.go:172] (0xc0009c60b0) Data frame received for 5\nI0619 14:13:41.017733 3050 log.go:172] (0xc000970820) (5) Data frame handling\nI0619 14:13:41.017781 3050 log.go:172] (0xc0009c60b0) Data frame received for 3\nI0619 14:13:41.017803 3050 log.go:172] (0xc000970780) (3) Data frame handling\nI0619 14:13:41.019982 3050 log.go:172] (0xc0009c60b0) Data frame received for 1\nI0619 14:13:41.020016 3050 log.go:172] (0xc0009706e0) (1) Data frame handling\nI0619 14:13:41.020043 3050 log.go:172] (0xc0009706e0) (1) Data frame sent\nI0619 14:13:41.020065 3050 log.go:172] (0xc0009c60b0) (0xc0009706e0) Stream removed, broadcasting: 1\nI0619 14:13:41.020084 3050 log.go:172] (0xc0009c60b0) Go away received\nI0619 14:13:41.020509 3050 log.go:172] (0xc0009c60b0) (0xc0009706e0) Stream removed, broadcasting: 1\nI0619 14:13:41.020536 3050 log.go:172] (0xc0009c60b0) (0xc000970780) Stream removed, broadcasting: 3\nI0619 14:13:41.020555 3050 log.go:172] (0xc0009c60b0) (0xc000970820) Stream removed, broadcasting: 5\n" Jun 19 14:13:41.025: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 14:13:41.025: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 14:13:41.034: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jun 19 14:13:51.039: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 19 14:13:51.039: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 14:13:51.056: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999554s Jun 19 14:13:52.061: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.991897009s Jun 19 14:13:53.065: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.987089443s Jun 19 14:13:54.070: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.983078495s Jun 19 14:13:55.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.978437721s Jun 19 14:13:56.080: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.973949384s Jun 19 14:13:57.084: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.968756604s Jun 19 14:13:58.089: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.96441308s Jun 19 
14:13:59.093: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.959647358s Jun 19 14:14:00.098: INFO: Verifying statefulset ss doesn't scale past 1 for another 955.561035ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1624 Jun 19 14:14:01.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1624 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 14:14:01.318: INFO: stderr: "I0619 14:14:01.231323 3068 log.go:172] (0xc000118e70) (0xc0005c4820) Create stream\nI0619 14:14:01.231377 3068 log.go:172] (0xc000118e70) (0xc0005c4820) Stream added, broadcasting: 1\nI0619 14:14:01.233876 3068 log.go:172] (0xc000118e70) Reply frame received for 1\nI0619 14:14:01.233912 3068 log.go:172] (0xc000118e70) (0xc0009be000) Create stream\nI0619 14:14:01.233927 3068 log.go:172] (0xc000118e70) (0xc0009be000) Stream added, broadcasting: 3\nI0619 14:14:01.234984 3068 log.go:172] (0xc000118e70) Reply frame received for 3\nI0619 14:14:01.235028 3068 log.go:172] (0xc000118e70) (0xc00098e000) Create stream\nI0619 14:14:01.235042 3068 log.go:172] (0xc000118e70) (0xc00098e000) Stream added, broadcasting: 5\nI0619 14:14:01.235991 3068 log.go:172] (0xc000118e70) Reply frame received for 5\nI0619 14:14:01.311829 3068 log.go:172] (0xc000118e70) Data frame received for 3\nI0619 14:14:01.311883 3068 log.go:172] (0xc000118e70) Data frame received for 5\nI0619 14:14:01.311900 3068 log.go:172] (0xc00098e000) (5) Data frame handling\nI0619 14:14:01.311911 3068 log.go:172] (0xc00098e000) (5) Data frame sent\nI0619 14:14:01.311918 3068 log.go:172] (0xc000118e70) Data frame received for 5\nI0619 14:14:01.311924 3068 log.go:172] (0xc00098e000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0619 14:14:01.311947 3068 log.go:172] (0xc0009be000) (3) Data frame handling\nI0619 14:14:01.311957 3068 log.go:172] (0xc0009be000) (3) Data frame sent\nI0619 14:14:01.312120 3068 log.go:172] (0xc000118e70) Data frame received for 3\nI0619 14:14:01.312137 3068 log.go:172] (0xc0009be000) (3) Data frame handling\nI0619 14:14:01.313818 3068 log.go:172] (0xc000118e70) Data frame received for 1\nI0619 14:14:01.313851 3068 log.go:172] (0xc0005c4820) (1) Data frame handling\nI0619 14:14:01.313881 3068 log.go:172] (0xc0005c4820) (1) Data frame sent\nI0619 14:14:01.313902 3068 log.go:172] (0xc000118e70) (0xc0005c4820) Stream removed, broadcasting: 1\nI0619 14:14:01.313918 3068 log.go:172] (0xc000118e70) Go away received\nI0619 14:14:01.314214 3068 log.go:172] (0xc000118e70) (0xc0005c4820) Stream removed, broadcasting: 1\nI0619 14:14:01.314226 3068 log.go:172] (0xc000118e70) (0xc0009be000) Stream removed, broadcasting: 3\nI0619 14:14:01.314232 3068 log.go:172] (0xc000118e70) (0xc00098e000) Stream removed, broadcasting: 5\n" Jun 19 14:14:01.318: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 14:14:01.318: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 14:14:01.322: INFO: Found 1 stateful pods, waiting for 3 Jun 19 14:14:11.328: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jun 19 14:14:11.328: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jun 19 14:14:11.328: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: 
Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Jun 19 14:14:11.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1624 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 14:14:11.565: INFO: stderr: "I0619 14:14:11.468842 3088 log.go:172] (0xc0001169a0) (0xc0003daa00) Create stream\nI0619 14:14:11.468900 3088 log.go:172] (0xc0001169a0) (0xc0003daa00) Stream added, broadcasting: 1\nI0619 14:14:11.471032 3088 log.go:172] (0xc0001169a0) Reply frame received for 1\nI0619 14:14:11.471193 3088 log.go:172] (0xc0001169a0) (0xc0008a2000) Create stream\nI0619 14:14:11.471286 3088 log.go:172] (0xc0001169a0) (0xc0008a2000) Stream added, broadcasting: 3\nI0619 14:14:11.472633 3088 log.go:172] (0xc0001169a0) Reply frame received for 3\nI0619 14:14:11.472699 3088 log.go:172] (0xc0001169a0) (0xc000750000) Create stream\nI0619 14:14:11.472723 3088 log.go:172] (0xc0001169a0) (0xc000750000) Stream added, broadcasting: 5\nI0619 14:14:11.474013 3088 log.go:172] (0xc0001169a0) Reply frame received for 5\nI0619 14:14:11.557833 3088 log.go:172] (0xc0001169a0) Data frame received for 5\nI0619 14:14:11.557879 3088 log.go:172] (0xc0001169a0) Data frame received for 3\nI0619 14:14:11.557925 3088 log.go:172] (0xc0008a2000) (3) Data frame handling\nI0619 14:14:11.557950 3088 log.go:172] (0xc0008a2000) (3) Data frame sent\nI0619 14:14:11.557960 3088 log.go:172] (0xc0001169a0) Data frame received for 3\nI0619 14:14:11.557968 3088 log.go:172] (0xc0008a2000) (3) Data frame handling\nI0619 14:14:11.557987 3088 log.go:172] (0xc000750000) (5) Data frame handling\nI0619 14:14:11.558005 3088 log.go:172] (0xc000750000) (5) Data frame sent\nI0619 14:14:11.558014 3088 log.go:172] (0xc0001169a0) Data frame received for 5\nI0619 14:14:11.558021 3088 log.go:172] (0xc000750000) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 14:14:11.558902 3088 log.go:172] (0xc0001169a0) Data frame received for 1\nI0619 14:14:11.558918 3088 log.go:172] (0xc0003daa00) (1) Data frame handling\nI0619 14:14:11.558926 3088 log.go:172] (0xc0003daa00) (1) Data frame sent\nI0619 14:14:11.558941 3088 log.go:172] (0xc0001169a0) (0xc0003daa00) Stream removed, broadcasting: 1\nI0619 14:14:11.558954 3088 log.go:172] (0xc0001169a0) Go away received\nI0619 14:14:11.559233 3088 log.go:172] (0xc0001169a0) (0xc0003daa00) Stream removed, broadcasting: 1\nI0619 14:14:11.559252 3088 log.go:172] (0xc0001169a0) (0xc0008a2000) Stream removed, broadcasting: 3\nI0619 14:14:11.559263 3088 log.go:172] (0xc0001169a0) (0xc000750000) Stream removed, broadcasting: 5\n" Jun 19 14:14:11.565: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 14:14:11.565: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 14:14:11.566: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1624 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 14:14:11.817: INFO: stderr: "I0619 14:14:11.713446 3109 log.go:172] (0xc0005a0420) (0xc0008a08c0) Create stream\nI0619 14:14:11.713511 3109 log.go:172] (0xc0005a0420) (0xc0008a08c0) Stream added, broadcasting: 1\nI0619 14:14:11.716864 3109 log.go:172] (0xc0005a0420) Reply frame received for 1\nI0619 14:14:11.716962 3109 log.go:172] (0xc0005a0420) (0xc0008a0000) Create stream\nI0619 
14:14:11.716976 3109 log.go:172] (0xc0005a0420) (0xc0008a0000) Stream added, broadcasting: 3\nI0619 14:14:11.718243 3109 log.go:172] (0xc0005a0420) Reply frame received for 3\nI0619 14:14:11.718284 3109 log.go:172] (0xc0005a0420) (0xc00069a320) Create stream\nI0619 14:14:11.718296 3109 log.go:172] (0xc0005a0420) (0xc00069a320) Stream added, broadcasting: 5\nI0619 14:14:11.719396 3109 log.go:172] (0xc0005a0420) Reply frame received for 5\nI0619 14:14:11.781596 3109 log.go:172] (0xc0005a0420) Data frame received for 5\nI0619 14:14:11.781622 3109 log.go:172] (0xc00069a320) (5) Data frame handling\nI0619 14:14:11.781637 3109 log.go:172] (0xc00069a320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0619 14:14:11.808941 3109 log.go:172] (0xc0005a0420) Data frame received for 3\nI0619 14:14:11.808984 3109 log.go:172] (0xc0008a0000) (3) Data frame handling\nI0619 14:14:11.809010 3109 log.go:172] (0xc0008a0000) (3) Data frame sent\nI0619 14:14:11.809367 3109 log.go:172] (0xc0005a0420) Data frame received for 3\nI0619 14:14:11.809403 3109 log.go:172] (0xc0008a0000) (3) Data frame handling\nI0619 14:14:11.809426 3109 log.go:172] (0xc0005a0420) Data frame received for 5\nI0619 14:14:11.809439 3109 log.go:172] (0xc00069a320) (5) Data frame handling\nI0619 14:14:11.811397 3109 log.go:172] (0xc0005a0420) Data frame received for 1\nI0619 14:14:11.811420 3109 log.go:172] (0xc0008a08c0) (1) Data frame handling\nI0619 14:14:11.811442 3109 log.go:172] (0xc0008a08c0) (1) Data frame sent\nI0619 14:14:11.811462 3109 log.go:172] (0xc0005a0420) (0xc0008a08c0) Stream removed, broadcasting: 1\nI0619 14:14:11.811489 3109 log.go:172] (0xc0005a0420) Go away received\nI0619 14:14:11.811807 3109 log.go:172] (0xc0005a0420) (0xc0008a08c0) Stream removed, broadcasting: 1\nI0619 14:14:11.811831 3109 log.go:172] (0xc0005a0420) (0xc0008a0000) Stream removed, broadcasting: 3\nI0619 14:14:11.811849 3109 log.go:172] (0xc0005a0420) (0xc00069a320) Stream removed, broadcasting: 5\n" Jun 19 14:14:11.818: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 14:14:11.818: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 14:14:11.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1624 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jun 19 14:14:12.062: INFO: stderr: "I0619 14:14:11.949324 3132 log.go:172] (0xc0009b0420) (0xc0003926e0) Create stream\nI0619 14:14:11.949375 3132 log.go:172] (0xc0009b0420) (0xc0003926e0) Stream added, broadcasting: 1\nI0619 14:14:11.952077 3132 log.go:172] (0xc0009b0420) Reply frame received for 1\nI0619 14:14:11.952151 3132 log.go:172] (0xc0009b0420) (0xc0007c4320) Create stream\nI0619 14:14:11.952175 3132 log.go:172] (0xc0009b0420) (0xc0007c4320) Stream added, broadcasting: 3\nI0619 14:14:11.954249 3132 log.go:172] (0xc0009b0420) Reply frame received for 3\nI0619 14:14:11.954287 3132 log.go:172] (0xc0009b0420) (0xc000392780) Create stream\nI0619 14:14:11.954301 3132 log.go:172] (0xc0009b0420) (0xc000392780) Stream added, broadcasting: 5\nI0619 14:14:11.955483 3132 log.go:172] (0xc0009b0420) Reply frame received for 5\nI0619 14:14:12.020606 3132 log.go:172] (0xc0009b0420) Data frame received for 5\nI0619 14:14:12.020638 3132 log.go:172] (0xc000392780) (5) Data frame handling\nI0619 14:14:12.020655 3132 log.go:172] (0xc000392780) (5) Data frame sent\n+ mv -v 
/usr/share/nginx/html/index.html /tmp/\nI0619 14:14:12.055023 3132 log.go:172] (0xc0009b0420) Data frame received for 3\nI0619 14:14:12.055055 3132 log.go:172] (0xc0007c4320) (3) Data frame handling\nI0619 14:14:12.055062 3132 log.go:172] (0xc0007c4320) (3) Data frame sent\nI0619 14:14:12.055067 3132 log.go:172] (0xc0009b0420) Data frame received for 3\nI0619 14:14:12.055070 3132 log.go:172] (0xc0007c4320) (3) Data frame handling\nI0619 14:14:12.055094 3132 log.go:172] (0xc0009b0420) Data frame received for 5\nI0619 14:14:12.055101 3132 log.go:172] (0xc000392780) (5) Data frame handling\nI0619 14:14:12.056799 3132 log.go:172] (0xc0009b0420) Data frame received for 1\nI0619 14:14:12.056814 3132 log.go:172] (0xc0003926e0) (1) Data frame handling\nI0619 14:14:12.056824 3132 log.go:172] (0xc0003926e0) (1) Data frame sent\nI0619 14:14:12.056832 3132 log.go:172] (0xc0009b0420) (0xc0003926e0) Stream removed, broadcasting: 1\nI0619 14:14:12.056840 3132 log.go:172] (0xc0009b0420) Go away received\nI0619 14:14:12.057391 3132 log.go:172] (0xc0009b0420) (0xc0003926e0) Stream removed, broadcasting: 1\nI0619 14:14:12.057414 3132 log.go:172] (0xc0009b0420) (0xc0007c4320) Stream removed, broadcasting: 3\nI0619 14:14:12.057423 3132 log.go:172] (0xc0009b0420) (0xc000392780) Stream removed, broadcasting: 5\n" Jun 19 14:14:12.062: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jun 19 14:14:12.062: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jun 19 14:14:12.062: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 14:14:12.064: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jun 19 14:14:22.072: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jun 19 14:14:22.072: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jun 19 14:14:22.072: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jun 19 14:14:22.084: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999657s Jun 19 14:14:23.101: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.995678951s Jun 19 14:14:24.107: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.978279919s Jun 19 14:14:25.112: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.972151355s Jun 19 14:14:26.117: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.96678778s Jun 19 14:14:27.123: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.961868736s Jun 19 14:14:28.127: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.956651311s Jun 19 14:14:29.133: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.951806001s Jun 19 14:14:30.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.946414553s Jun 19 14:14:31.154: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.32217ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1624 Jun 19 14:14:32.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1624 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 14:14:32.386: INFO: stderr: "I0619 14:14:32.283897 3152 log.go:172] (0xc00099e420) (0xc00055e820) Create stream\nI0619 14:14:32.283949 3152 log.go:172] (0xc00099e420) 
(0xc00055e820) Stream added, broadcasting: 1\nI0619 14:14:32.287856 3152 log.go:172] (0xc00099e420) Reply frame received for 1\nI0619 14:14:32.287931 3152 log.go:172] (0xc00099e420) (0xc00055e000) Create stream\nI0619 14:14:32.287953 3152 log.go:172] (0xc00099e420) (0xc00055e000) Stream added, broadcasting: 3\nI0619 14:14:32.289267 3152 log.go:172] (0xc00099e420) Reply frame received for 3\nI0619 14:14:32.289305 3152 log.go:172] (0xc00099e420) (0xc000562460) Create stream\nI0619 14:14:32.289324 3152 log.go:172] (0xc00099e420) (0xc000562460) Stream added, broadcasting: 5\nI0619 14:14:32.290318 3152 log.go:172] (0xc00099e420) Reply frame received for 5\nI0619 14:14:32.377083 3152 log.go:172] (0xc00099e420) Data frame received for 5\nI0619 14:14:32.377405 3152 log.go:172] (0xc000562460) (5) Data frame handling\nI0619 14:14:32.377422 3152 log.go:172] (0xc000562460) (5) Data frame sent\nI0619 14:14:32.377428 3152 log.go:172] (0xc00099e420) Data frame received for 5\nI0619 14:14:32.377432 3152 log.go:172] (0xc000562460) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0619 14:14:32.377460 3152 log.go:172] (0xc00099e420) Data frame received for 3\nI0619 14:14:32.377468 3152 log.go:172] (0xc00055e000) (3) Data frame handling\nI0619 14:14:32.377485 3152 log.go:172] (0xc00055e000) (3) Data frame sent\nI0619 14:14:32.377491 3152 log.go:172] (0xc00099e420) Data frame received for 3\nI0619 14:14:32.377495 3152 log.go:172] (0xc00055e000) (3) Data frame handling\nI0619 14:14:32.378763 3152 log.go:172] (0xc00099e420) Data frame received for 1\nI0619 14:14:32.378781 3152 log.go:172] (0xc00055e820) (1) Data frame handling\nI0619 14:14:32.378789 3152 log.go:172] (0xc00055e820) (1) Data frame sent\nI0619 14:14:32.378799 3152 log.go:172] (0xc00099e420) (0xc00055e820) Stream removed, broadcasting: 1\nI0619 14:14:32.378810 3152 log.go:172] (0xc00099e420) Go away received\nI0619 14:14:32.379165 3152 log.go:172] (0xc00099e420) (0xc00055e820) Stream removed, broadcasting: 1\nI0619 14:14:32.379193 3152 log.go:172] (0xc00099e420) (0xc00055e000) Stream removed, broadcasting: 3\nI0619 14:14:32.379207 3152 log.go:172] (0xc00099e420) (0xc000562460) Stream removed, broadcasting: 5\n" Jun 19 14:14:32.386: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 14:14:32.386: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 14:14:32.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1624 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 14:14:32.594: INFO: stderr: "I0619 14:14:32.519259 3174 log.go:172] (0xc000a2e630) (0xc000506be0) Create stream\nI0619 14:14:32.519304 3174 log.go:172] (0xc000a2e630) (0xc000506be0) Stream added, broadcasting: 1\nI0619 14:14:32.521950 3174 log.go:172] (0xc000a2e630) Reply frame received for 1\nI0619 14:14:32.521996 3174 log.go:172] (0xc000a2e630) (0xc000a40000) Create stream\nI0619 14:14:32.522015 3174 log.go:172] (0xc000a2e630) (0xc000a40000) Stream added, broadcasting: 3\nI0619 14:14:32.523643 3174 log.go:172] (0xc000a2e630) Reply frame received for 3\nI0619 14:14:32.523750 3174 log.go:172] (0xc000a2e630) (0xc000950000) Create stream\nI0619 14:14:32.523823 3174 log.go:172] (0xc000a2e630) (0xc000950000) Stream added, broadcasting: 5\nI0619 14:14:32.525601 3174 log.go:172] (0xc000a2e630) Reply frame received for 5\nI0619 14:14:32.584627 3174 log.go:172] (0xc000a2e630) 
Data frame received for 3\nI0619 14:14:32.584655 3174 log.go:172] (0xc000a40000) (3) Data frame handling\nI0619 14:14:32.584668 3174 log.go:172] (0xc000a40000) (3) Data frame sent\nI0619 14:14:32.584677 3174 log.go:172] (0xc000a2e630) Data frame received for 3\nI0619 14:14:32.584684 3174 log.go:172] (0xc000a40000) (3) Data frame handling\nI0619 14:14:32.584720 3174 log.go:172] (0xc000a2e630) Data frame received for 5\nI0619 14:14:32.584742 3174 log.go:172] (0xc000950000) (5) Data frame handling\nI0619 14:14:32.584785 3174 log.go:172] (0xc000950000) (5) Data frame sent\nI0619 14:14:32.584803 3174 log.go:172] (0xc000a2e630) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0619 14:14:32.584816 3174 log.go:172] (0xc000950000) (5) Data frame handling\nI0619 14:14:32.586694 3174 log.go:172] (0xc000a2e630) Data frame received for 1\nI0619 14:14:32.586728 3174 log.go:172] (0xc000506be0) (1) Data frame handling\nI0619 14:14:32.586747 3174 log.go:172] (0xc000506be0) (1) Data frame sent\nI0619 14:14:32.586770 3174 log.go:172] (0xc000a2e630) (0xc000506be0) Stream removed, broadcasting: 1\nI0619 14:14:32.586799 3174 log.go:172] (0xc000a2e630) Go away received\nI0619 14:14:32.587143 3174 log.go:172] (0xc000a2e630) (0xc000506be0) Stream removed, broadcasting: 1\nI0619 14:14:32.587165 3174 log.go:172] (0xc000a2e630) (0xc000a40000) Stream removed, broadcasting: 3\nI0619 14:14:32.587175 3174 log.go:172] (0xc000a2e630) (0xc000950000) Stream removed, broadcasting: 5\n" Jun 19 14:14:32.594: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 14:14:32.594: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 14:14:32.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1624 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jun 19 14:14:32.815: INFO: stderr: "I0619 14:14:32.742895 3196 log.go:172] (0xc0009b6370) (0xc00072e820) Create stream\nI0619 14:14:32.742973 3196 log.go:172] (0xc0009b6370) (0xc00072e820) Stream added, broadcasting: 1\nI0619 14:14:32.747866 3196 log.go:172] (0xc0009b6370) Reply frame received for 1\nI0619 14:14:32.747913 3196 log.go:172] (0xc0009b6370) (0xc00072e000) Create stream\nI0619 14:14:32.747927 3196 log.go:172] (0xc0009b6370) (0xc00072e000) Stream added, broadcasting: 3\nI0619 14:14:32.749054 3196 log.go:172] (0xc0009b6370) Reply frame received for 3\nI0619 14:14:32.749108 3196 log.go:172] (0xc0009b6370) (0xc00060e280) Create stream\nI0619 14:14:32.749344 3196 log.go:172] (0xc0009b6370) (0xc00060e280) Stream added, broadcasting: 5\nI0619 14:14:32.750310 3196 log.go:172] (0xc0009b6370) Reply frame received for 5\nI0619 14:14:32.805993 3196 log.go:172] (0xc0009b6370) Data frame received for 3\nI0619 14:14:32.806027 3196 log.go:172] (0xc00072e000) (3) Data frame handling\nI0619 14:14:32.806040 3196 log.go:172] (0xc00072e000) (3) Data frame sent\nI0619 14:14:32.806072 3196 log.go:172] (0xc0009b6370) Data frame received for 5\nI0619 14:14:32.806086 3196 log.go:172] (0xc00060e280) (5) Data frame handling\nI0619 14:14:32.806106 3196 log.go:172] (0xc00060e280) (5) Data frame sent\nI0619 14:14:32.806120 3196 log.go:172] (0xc0009b6370) Data frame received for 5\nI0619 14:14:32.806153 3196 log.go:172] (0xc00060e280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0619 14:14:32.806234 3196 log.go:172] (0xc0009b6370) Data frame received for 3\nI0619 
14:14:32.806270 3196 log.go:172] (0xc00072e000) (3) Data frame handling\nI0619 14:14:32.808020 3196 log.go:172] (0xc0009b6370) Data frame received for 1\nI0619 14:14:32.808040 3196 log.go:172] (0xc00072e820) (1) Data frame handling\nI0619 14:14:32.808056 3196 log.go:172] (0xc00072e820) (1) Data frame sent\nI0619 14:14:32.808083 3196 log.go:172] (0xc0009b6370) (0xc00072e820) Stream removed, broadcasting: 1\nI0619 14:14:32.808375 3196 log.go:172] (0xc0009b6370) (0xc00072e820) Stream removed, broadcasting: 1\nI0619 14:14:32.808392 3196 log.go:172] (0xc0009b6370) (0xc00072e000) Stream removed, broadcasting: 3\nI0619 14:14:32.808528 3196 log.go:172] (0xc0009b6370) (0xc00060e280) Stream removed, broadcasting: 5\n" Jun 19 14:14:32.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jun 19 14:14:32.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jun 19 14:14:32.815: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Jun 19 14:15:02.832: INFO: Deleting all statefulset in ns statefulset-1624 Jun 19 14:15:02.835: INFO: Scaling statefulset ss to 0 Jun 19 14:15:02.843: INFO: Waiting for statefulset status.replicas updated to 0 Jun 19 14:15:02.845: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:15:02.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1624" for this suite. 
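The mv commands run through kubectl exec above are the test's lever: moving index.html out of the nginx web root makes the pod's HTTP readiness probe fail, and the StatefulSet controller then refuses to continue ordered scaling until the pod is healthy again (moving the file back restores readiness). A minimal sketch of that step driven from Go, assuming kubectl is on PATH and the kubeconfig used throughout this run.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Break the readiness probe of ss-0 exactly as the test does: the mv is
	// wrapped in "|| true" so the exec succeeds even if the file is already gone.
	cmd := exec.Command("kubectl",
		"--kubeconfig", "/root/.kube/config",
		"exec", "--namespace=statefulset-1624", "ss-0", "--",
		"/bin/sh", "-x", "-c",
		"mv -v /usr/share/nginx/html/index.html /tmp/ || true")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("exec failed:", err)
	}
}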
Jun 19 14:15:08.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:15:08.964: INFO: namespace statefulset-1624 deletion completed in 6.086951923s • [SLOW TEST:98.348 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:15:08.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 19 14:15:09.068: INFO: Waiting up to 5m0s for pod "downward-api-85f3188b-9865-4d89-8e35-6228a46aaa8f" in namespace "downward-api-6197" to be "success or failure" Jun 19 14:15:09.143: INFO: Pod "downward-api-85f3188b-9865-4d89-8e35-6228a46aaa8f": Phase="Pending", Reason="", readiness=false. Elapsed: 74.9677ms Jun 19 14:15:11.147: INFO: Pod "downward-api-85f3188b-9865-4d89-8e35-6228a46aaa8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078805406s Jun 19 14:15:13.151: INFO: Pod "downward-api-85f3188b-9865-4d89-8e35-6228a46aaa8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.082934757s STEP: Saw pod success Jun 19 14:15:13.151: INFO: Pod "downward-api-85f3188b-9865-4d89-8e35-6228a46aaa8f" satisfied condition "success or failure" Jun 19 14:15:13.154: INFO: Trying to get logs from node iruya-worker2 pod downward-api-85f3188b-9865-4d89-8e35-6228a46aaa8f container dapi-container: STEP: delete the pod Jun 19 14:15:13.173: INFO: Waiting for pod downward-api-85f3188b-9865-4d89-8e35-6228a46aaa8f to disappear Jun 19 14:15:13.183: INFO: Pod downward-api-85f3188b-9865-4d89-8e35-6228a46aaa8f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:15:13.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6197" for this suite. 
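Here the host IP reaches the container through an environment variable backed by a fieldRef rather than a volume file. A minimal sketch of the container definition this test depends on; the image and command are illustrative, while HOST_IP and status.hostIP are the downward API pieces the test actually checks.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// An env var populated from the pod's status.hostIP via the downward API;
	// the test container just prints its environment and the framework greps
	// the logs for the expected value.
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox", // assumed image
		Command: []string{"sh", "-c", "env"},
		Env: []corev1.EnvVar{{
			Name: "HOST_IP",
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{
					APIVersion: "v1",
					FieldPath:  "status.hostIP",
				},
			},
		}},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}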
Jun 19 14:15:19.208: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:15:19.283: INFO: namespace downward-api-6197 deletion completed in 6.096043404s • [SLOW TEST:10.318 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:15:19.284: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Jun 19 14:15:19.383: INFO: Waiting up to 5m0s for pod "var-expansion-07310d06-4c48-4273-8a8b-407600626e73" in namespace "var-expansion-4777" to be "success or failure" Jun 19 14:15:19.429: INFO: Pod "var-expansion-07310d06-4c48-4273-8a8b-407600626e73": Phase="Pending", Reason="", readiness=false. Elapsed: 46.316621ms Jun 19 14:15:21.433: INFO: Pod "var-expansion-07310d06-4c48-4273-8a8b-407600626e73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050403155s Jun 19 14:15:23.437: INFO: Pod "var-expansion-07310d06-4c48-4273-8a8b-407600626e73": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054378325s STEP: Saw pod success Jun 19 14:15:23.437: INFO: Pod "var-expansion-07310d06-4c48-4273-8a8b-407600626e73" satisfied condition "success or failure" Jun 19 14:15:23.440: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-07310d06-4c48-4273-8a8b-407600626e73 container dapi-container: STEP: delete the pod Jun 19 14:15:23.467: INFO: Waiting for pod var-expansion-07310d06-4c48-4273-8a8b-407600626e73 to disappear Jun 19 14:15:23.508: INFO: Pod var-expansion-07310d06-4c48-4273-8a8b-407600626e73 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:15:23.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4777" for this suite. 
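Substitution in a container's command uses the $(VAR) syntax: the kubelet expands references to previously declared env vars before the process starts, so no shell is needed for the expansion itself. A minimal sketch; the variable name and value are illustrative.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// $(TEST_VAR) is expanded by the kubelet, not the shell, because TEST_VAR
	// is declared under env; a reference to an undeclared variable would be
	// passed through to the container literally.
	c := corev1.Container{
		Name:    "dapi-container",
		Image:   "busybox", // assumed image
		Command: []string{"sh", "-c", "echo $(TEST_VAR)"},
		Env:     []corev1.EnvVar{{Name: "TEST_VAR", Value: "test-value"}},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}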
Jun 19 14:15:29.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:15:29.604: INFO: namespace var-expansion-4777 deletion completed in 6.092299305s • [SLOW TEST:10.320 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:15:29.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Jun 19 14:15:29.654: INFO: Waiting up to 5m0s for pod "client-containers-3f199723-cfb7-43d6-b0dc-c37f02aace1f" in namespace "containers-4619" to be "success or failure" Jun 19 14:15:29.665: INFO: Pod "client-containers-3f199723-cfb7-43d6-b0dc-c37f02aace1f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.029359ms Jun 19 14:15:31.670: INFO: Pod "client-containers-3f199723-cfb7-43d6-b0dc-c37f02aace1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016311401s Jun 19 14:15:33.675: INFO: Pod "client-containers-3f199723-cfb7-43d6-b0dc-c37f02aace1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020720271s STEP: Saw pod success Jun 19 14:15:33.675: INFO: Pod "client-containers-3f199723-cfb7-43d6-b0dc-c37f02aace1f" satisfied condition "success or failure" Jun 19 14:15:33.678: INFO: Trying to get logs from node iruya-worker2 pod client-containers-3f199723-cfb7-43d6-b0dc-c37f02aace1f container test-container: STEP: delete the pod Jun 19 14:15:33.698: INFO: Waiting for pod client-containers-3f199723-cfb7-43d6-b0dc-c37f02aace1f to disappear Jun 19 14:15:33.712: INFO: Pod client-containers-3f199723-cfb7-43d6-b0dc-c37f02aace1f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:15:33.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-4619" for this suite. 
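Overriding the image's default arguments means setting args in the container spec: args replaces the image's CMD, while command, left unset, keeps the image's ENTRYPOINT (the companion test above does the inverse to override the entrypoint). A minimal sketch; the image and argument strings are assumptions.

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// command unset -> the image ENTRYPOINT is kept;
	// args set      -> the image CMD is replaced with these strings.
	c := corev1.Container{
		Name:  "test-container",
		Image: "gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0", // assumed image
		Args:  []string{"override", "arguments"},
	}
	b, _ := json.MarshalIndent(c, "", "  ")
	fmt.Println(string(b))
}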
Jun 19 14:15:39.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:15:39.826: INFO: namespace containers-4619 deletion completed in 6.111051128s • [SLOW TEST:10.222 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:15:39.826: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 14:15:39.951: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Jun 19 14:15:44.956: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jun 19 14:15:44.957: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 19 14:15:45.005: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4830,SelfLink:/apis/apps/v1/namespaces/deployment-4830/deployments/test-cleanup-deployment,UID:22a24f8b-c2b0-48b1-9430-7e443a1eee2f,ResourceVersion:17327839,Generation:1,CreationTimestamp:2020-06-19 14:15:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Jun 19 14:15:45.024: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4830,SelfLink:/apis/apps/v1/namespaces/deployment-4830/replicasets/test-cleanup-deployment-55bbcbc84c,UID:5c1fac3b-da9e-4b0e-9194-0ac1b229b1d9,ResourceVersion:17327841,Generation:1,CreationTimestamp:2020-06-19 14:15:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 22a24f8b-c2b0-48b1-9430-7e443a1eee2f 0xc002d1caa7 0xc002d1caa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 19 14:15:45.024: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Jun 19 14:15:45.024: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-4830,SelfLink:/apis/apps/v1/namespaces/deployment-4830/replicasets/test-cleanup-controller,UID:04710f06-6e5b-4bdf-a26d-dd3b708ea9a2,ResourceVersion:17327840,Generation:1,CreationTimestamp:2020-06-19 14:15:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 22a24f8b-c2b0-48b1-9430-7e443a1eee2f 0xc002d1c9d7 0xc002d1c9d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jun 19 14:15:45.089: INFO: Pod "test-cleanup-controller-vvdx2" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-vvdx2,GenerateName:test-cleanup-controller-,Namespace:deployment-4830,SelfLink:/api/v1/namespaces/deployment-4830/pods/test-cleanup-controller-vvdx2,UID:b187f8f6-12e5-400c-8d10-f9c802b0eaf6,ResourceVersion:17327831,Generation:0,CreationTimestamp:2020-06-19 14:15:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 04710f06-6e5b-4bdf-a26d-dd3b708ea9a2 0xc0027bea3f 0xc0027bea50}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n6f9t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6f9t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-n6f9t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027beac0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027beae0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:15:40 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:15:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:15:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:15:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.135,StartTime:2020-06-19 14:15:40 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:15:42 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0817228dc4cbe8fd4417c90262c4a15ab803a9df15c4aed66d8635b02e701526}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:15:45.089: INFO: Pod "test-cleanup-deployment-55bbcbc84c-dq94c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-dq94c,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4830,SelfLink:/api/v1/namespaces/deployment-4830/pods/test-cleanup-deployment-55bbcbc84c-dq94c,UID:f15120cc-3cd6-4274-9de6-9cbfda6e6f01,ResourceVersion:17327846,Generation:0,CreationTimestamp:2020-06-19 14:15:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 
55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 5c1fac3b-da9e-4b0e-9194-0ac1b229b1d9 0xc0027bebc7 0xc0027bebc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-n6f9t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-n6f9t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-n6f9t true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027bec40} {node.kubernetes.io/unreachable Exists NoExecute 0xc0027bec60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:15:45 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:15:45.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4830" for this suite. 
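The ReplicaSet cleanup verified above is driven by the deployment's revision history limit, which the spec dump shows as RevisionHistoryLimit:*0: superseded ReplicaSets are garbage-collected as soon as a rollout replaces them. An illustrative manifest (names and image are placeholders, not from this run):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo              # hypothetical name
spec:
  replicas: 1
  revisionHistoryLimit: 0         # keep no old ReplicaSets after a rollout
  selector:
    matchLabels:
      app: cleanup-demo
  template:
    metadata:
      labels:
        app: cleanup-demo
    spec:
      containers:
      - name: web
        image: nginx:1.14-alpine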
Jun 19 14:15:51.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:15:51.273: INFO: namespace deployment-4830 deletion completed in 6.126517524s • [SLOW TEST:11.447 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:15:51.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:15:51.379: INFO: Waiting up to 5m0s for pod "downwardapi-volume-745022d0-0fbb-4698-8b11-b01568f952ff" in namespace "downward-api-8939" to be "success or failure" Jun 19 14:15:51.455: INFO: Pod "downwardapi-volume-745022d0-0fbb-4698-8b11-b01568f952ff": Phase="Pending", Reason="", readiness=false. Elapsed: 75.371673ms Jun 19 14:15:53.459: INFO: Pod "downwardapi-volume-745022d0-0fbb-4698-8b11-b01568f952ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080100374s Jun 19 14:15:55.464: INFO: Pod "downwardapi-volume-745022d0-0fbb-4698-8b11-b01568f952ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084352652s STEP: Saw pod success Jun 19 14:15:55.464: INFO: Pod "downwardapi-volume-745022d0-0fbb-4698-8b11-b01568f952ff" satisfied condition "success or failure" Jun 19 14:15:55.467: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-745022d0-0fbb-4698-8b11-b01568f952ff container client-container: STEP: delete the pod Jun 19 14:15:55.510: INFO: Waiting for pod downwardapi-volume-745022d0-0fbb-4698-8b11-b01568f952ff to disappear Jun 19 14:15:55.521: INFO: Pod downwardapi-volume-745022d0-0fbb-4698-8b11-b01568f952ff no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:15:55.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8939" for this suite. 
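The downward API volume spec above mounts the pod's own name as a file and has the client-container read it back. A minimal equivalent, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-volume-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name    # the file contains the pod's name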
Jun 19 14:16:01.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:16:01.659: INFO: namespace downward-api-8939 deletion completed in 6.134305726s • [SLOW TEST:10.385 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:16:01.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Jun 19 14:16:01.734: INFO: Waiting up to 5m0s for pod "pod-8f1ea520-e4f0-4ba1-889b-ce9299eae493" in namespace "emptydir-2557" to be "success or failure" Jun 19 14:16:01.738: INFO: Pod "pod-8f1ea520-e4f0-4ba1-889b-ce9299eae493": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068606ms Jun 19 14:16:03.772: INFO: Pod "pod-8f1ea520-e4f0-4ba1-889b-ce9299eae493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038315937s Jun 19 14:16:05.776: INFO: Pod "pod-8f1ea520-e4f0-4ba1-889b-ce9299eae493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042815284s STEP: Saw pod success Jun 19 14:16:05.776: INFO: Pod "pod-8f1ea520-e4f0-4ba1-889b-ce9299eae493" satisfied condition "success or failure" Jun 19 14:16:05.779: INFO: Trying to get logs from node iruya-worker pod pod-8f1ea520-e4f0-4ba1-889b-ce9299eae493 container test-container: STEP: delete the pod Jun 19 14:16:05.797: INFO: Waiting for pod pod-8f1ea520-e4f0-4ba1-889b-ce9299eae493 to disappear Jun 19 14:16:05.801: INFO: Pod pod-8f1ea520-e4f0-4ba1-889b-ce9299eae493 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:16:05.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2557" for this suite. 
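The emptyDir (root,0644,default) spec above verifies that a file created with mode 0644 on a default-medium emptyDir keeps those permissions. Roughly the same check done by hand, using a shell one-liner instead of the suite's test image; all names are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c",
      "echo data > /cache/file && chmod 0644 /cache/file && stat -c '%a' /cache/file"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}                  # default medium: node-local disk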
Jun 19 14:16:11.823: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:16:11.905: INFO: namespace emptydir-2557 deletion completed in 6.100905415s • [SLOW TEST:10.246 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:16:11.906: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 19 14:16:11.947: INFO: Waiting up to 5m0s for pod "pod-64b4a9f4-669a-48c9-9d77-14d557fd6a69" in namespace "emptydir-3719" to be "success or failure" Jun 19 14:16:11.963: INFO: Pod "pod-64b4a9f4-669a-48c9-9d77-14d557fd6a69": Phase="Pending", Reason="", readiness=false. Elapsed: 16.219938ms Jun 19 14:16:13.982: INFO: Pod "pod-64b4a9f4-669a-48c9-9d77-14d557fd6a69": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034683562s Jun 19 14:16:15.986: INFO: Pod "pod-64b4a9f4-669a-48c9-9d77-14d557fd6a69": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038804148s STEP: Saw pod success Jun 19 14:16:15.986: INFO: Pod "pod-64b4a9f4-669a-48c9-9d77-14d557fd6a69" satisfied condition "success or failure" Jun 19 14:16:15.988: INFO: Trying to get logs from node iruya-worker2 pod pod-64b4a9f4-669a-48c9-9d77-14d557fd6a69 container test-container: STEP: delete the pod Jun 19 14:16:16.163: INFO: Waiting for pod pod-64b4a9f4-669a-48c9-9d77-14d557fd6a69 to disappear Jun 19 14:16:16.179: INFO: Pod pod-64b4a9f4-669a-48c9-9d77-14d557fd6a69 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:16:16.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3719" for this suite. 
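The (non-root,0777,tmpfs) variant above differs in two ways: the volume is memory-backed and the process runs as a non-root UID (emptyDir directories are created world-writable, so the write still succeeds). Sketched with placeholder names and a stand-in UID:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo       # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # any non-root UID
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["/bin/sh", "-c",
      "echo data > /cache/file && chmod 0777 /cache/file && ls -l /cache/file"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory              # tmpfs instead of node-local disk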
Jun 19 14:16:22.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:16:22.327: INFO: namespace emptydir-3719 deletion completed in 6.142783924s • [SLOW TEST:10.421 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:16:22.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 19 14:16:22.396: INFO: Waiting up to 5m0s for pod "downward-api-b879bd5b-a462-4c47-b2a6-561cf5d930f9" in namespace "downward-api-6054" to be "success or failure" Jun 19 14:16:22.427: INFO: Pod "downward-api-b879bd5b-a462-4c47-b2a6-561cf5d930f9": Phase="Pending", Reason="", readiness=false. Elapsed: 31.072172ms Jun 19 14:16:24.431: INFO: Pod "downward-api-b879bd5b-a462-4c47-b2a6-561cf5d930f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034922702s Jun 19 14:16:26.434: INFO: Pod "downward-api-b879bd5b-a462-4c47-b2a6-561cf5d930f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03856845s STEP: Saw pod success Jun 19 14:16:26.434: INFO: Pod "downward-api-b879bd5b-a462-4c47-b2a6-561cf5d930f9" satisfied condition "success or failure" Jun 19 14:16:26.438: INFO: Trying to get logs from node iruya-worker pod downward-api-b879bd5b-a462-4c47-b2a6-561cf5d930f9 container dapi-container: STEP: delete the pod Jun 19 14:16:26.504: INFO: Waiting for pod downward-api-b879bd5b-a462-4c47-b2a6-561cf5d930f9 to disappear Jun 19 14:16:26.507: INFO: Pod downward-api-b879bd5b-a462-4c47-b2a6-561cf5d930f9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:16:26.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6054" for this suite. 
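The downward API spec above injects the pod's UID through an environment variable rather than a volume. A minimal reproduction with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: downward-uid-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid   # resolved by the kubelet at container start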
Jun 19 14:16:32.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:16:32.610: INFO: namespace downward-api-6054 deletion completed in 6.099193575s • [SLOW TEST:10.283 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:16:32.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:16:32.686: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96" in namespace "projected-1047" to be "success or failure" Jun 19 14:16:32.718: INFO: Pod "downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96": Phase="Pending", Reason="", readiness=false. Elapsed: 32.26054ms Jun 19 14:16:34.724: INFO: Pod "downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037435735s Jun 19 14:16:36.728: INFO: Pod "downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96": Phase="Running", Reason="", readiness=true. Elapsed: 4.041848477s Jun 19 14:16:38.742: INFO: Pod "downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.055892864s STEP: Saw pod success Jun 19 14:16:38.742: INFO: Pod "downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96" satisfied condition "success or failure" Jun 19 14:16:38.745: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96 container client-container: STEP: delete the pod Jun 19 14:16:38.763: INFO: Waiting for pod downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96 to disappear Jun 19 14:16:38.767: INFO: Pod downwardapi-volume-0abb95ce-00c0-4c81-be61-9db0f7076b96 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:16:38.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1047" for this suite. 
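The projected downwardAPI spec above surfaces the container's own memory limit as a file via resourceFieldRef. A sketch of the shape involved; the names and the 64Mi limit are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: projected-memlimit-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/podinfo/memory_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory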
Jun 19 14:16:44.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:16:44.860: INFO: namespace projected-1047 deletion completed in 6.088834541s • [SLOW TEST:12.250 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:16:44.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Jun 19 14:16:48.943: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jun 19 14:17:04.040: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:17:04.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5274" for this suite. 
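The Delete Grace Period spec above deletes a pod with a short grace period and then, via the kubectl proxy it started, confirms the kubelet observed the termination. The grace period itself is ordinary pod-spec machinery; the manifest below is illustrative, not from this run:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-delete-demo      # hypothetical name
spec:
  terminationGracePeriodSeconds: 30   # SIGTERM first, SIGKILL after this window
  containers:
  - name: web
    image: nginx:1.14-alpine

A delete issued as, say, kubectl delete pod graceful-delete-demo --grace-period=3 overrides the spec's value for that one deletion, which is essentially what the test does through the API.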
Jun 19 14:17:10.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:17:10.138: INFO: namespace pods-5274 deletion completed in 6.089384019s • [SLOW TEST:25.278 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:17:10.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 19 14:17:10.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1742' Jun 19 14:17:10.331: INFO: stderr: "" Jun 19 14:17:10.331: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Jun 19 14:17:15.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1742 -o json' Jun 19 14:17:15.484: INFO: stderr: "" Jun 19 14:17:15.484: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-06-19T14:17:10Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-1742\",\n \"resourceVersion\": \"17328204\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1742/pods/e2e-test-nginx-pod\",\n \"uid\": \"db6c6de2-35c5-4779-a09e-b70b4461c557\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-8vv7c\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n 
\"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-8vv7c\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-8vv7c\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-19T14:17:10Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-19T14:17:13Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-19T14:17:13Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-06-19T14:17:10Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a0601a9c1e8e48a2cc1b864c9f93006dbed85ff8b5dae32689fb6650f4413516\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-06-19T14:17:12Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.138\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-06-19T14:17:10Z\"\n }\n}\n" STEP: replace the image in the pod Jun 19 14:17:15.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1742' Jun 19 14:17:15.769: INFO: stderr: "" Jun 19 14:17:15.769: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Jun 19 14:17:15.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1742' Jun 19 14:17:18.829: INFO: stderr: "" Jun 19 14:17:18.829: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:17:18.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1742" for this suite. 
Jun 19 14:17:24.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:17:24.969: INFO: namespace kubectl-1742 deletion completed in 6.135379559s • [SLOW TEST:14.830 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:17:24.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2fdea8dd-ba29-4e3a-9b2a-da4546164096 STEP: Creating a pod to test consume secrets Jun 19 14:17:25.029: INFO: Waiting up to 5m0s for pod "pod-secrets-0852384b-7b95-4368-a06d-1f7c6e44764c" in namespace "secrets-7156" to be "success or failure" Jun 19 14:17:25.032: INFO: Pod "pod-secrets-0852384b-7b95-4368-a06d-1f7c6e44764c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.877404ms Jun 19 14:17:27.036: INFO: Pod "pod-secrets-0852384b-7b95-4368-a06d-1f7c6e44764c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006399545s Jun 19 14:17:29.040: INFO: Pod "pod-secrets-0852384b-7b95-4368-a06d-1f7c6e44764c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010449292s STEP: Saw pod success Jun 19 14:17:29.040: INFO: Pod "pod-secrets-0852384b-7b95-4368-a06d-1f7c6e44764c" satisfied condition "success or failure" Jun 19 14:17:29.043: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-0852384b-7b95-4368-a06d-1f7c6e44764c container secret-env-test: STEP: delete the pod Jun 19 14:17:29.076: INFO: Waiting for pod pod-secrets-0852384b-7b95-4368-a06d-1f7c6e44764c to disappear Jun 19 14:17:29.086: INFO: Pod pod-secrets-0852384b-7b95-4368-a06d-1f7c6e44764c no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:17:29.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7156" for this suite. 
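The Secrets spec above wires a secret key into the container's environment with secretKeyRef. A minimal pair of manifests, with hypothetical names and a throwaway value:

apiVersion: v1
kind: Secret
metadata:
  name: secret-env-demo           # hypothetical name
type: Opaque
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo SECRET_DATA=$SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-env-demo
          key: data-1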
Jun 19 14:17:35.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:17:35.178: INFO: namespace secrets-7156 deletion completed in 6.088161663s • [SLOW TEST:10.209 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:17:35.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Jun 19 14:17:35.232: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Jun 19 14:17:35.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5682' Jun 19 14:17:35.539: INFO: stderr: "" Jun 19 14:17:35.539: INFO: stdout: "service/redis-slave created\n" Jun 19 14:17:35.540: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Jun 19 14:17:35.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5682' Jun 19 14:17:35.854: INFO: stderr: "" Jun 19 14:17:35.854: INFO: stdout: "service/redis-master created\n" Jun 19 14:17:35.855: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. 
# type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Jun 19 14:17:35.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5682' Jun 19 14:17:36.197: INFO: stderr: "" Jun 19 14:17:36.197: INFO: stdout: "service/frontend created\n" Jun 19 14:17:36.197: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Jun 19 14:17:36.197: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5682' Jun 19 14:17:36.475: INFO: stderr: "" Jun 19 14:17:36.475: INFO: stdout: "deployment.apps/frontend created\n" Jun 19 14:17:36.475: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Jun 19 14:17:36.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5682' Jun 19 14:17:36.832: INFO: stderr: "" Jun 19 14:17:36.832: INFO: stdout: "deployment.apps/redis-master created\n" Jun 19 14:17:36.833: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Jun 19 14:17:36.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5682' Jun 19 14:17:37.070: INFO: stderr: "" Jun 19 14:17:37.071: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Jun 19 14:17:37.071: INFO: Waiting for all frontend pods to be Running. Jun 19 14:17:47.121: INFO: Waiting for frontend to serve content. Jun 19 14:17:47.173: INFO: Trying to add a new entry to the guestbook. Jun 19 14:17:47.196: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources Jun 19 14:17:47.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5682' Jun 19 14:17:47.399: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Jun 19 14:17:47.399: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jun 19 14:17:47.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5682' Jun 19 14:17:47.593: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 19 14:17:47.593: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 19 14:17:47.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5682' Jun 19 14:17:47.718: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 19 14:17:47.718: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 19 14:17:47.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5682' Jun 19 14:17:47.858: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 19 14:17:47.858: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jun 19 14:17:47.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5682' Jun 19 14:17:47.976: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 19 14:17:47.976: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jun 19 14:17:47.976: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5682' Jun 19 14:17:48.134: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jun 19 14:17:48.134: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:17:48.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5682" for this suite. 
Jun 19 14:18:34.180: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:18:34.259: INFO: namespace kubectl-5682 deletion completed in 46.121131255s • [SLOW TEST:59.081 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:18:34.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jun 19 14:18:38.870: INFO: Successfully updated pod "annotationupdate1f3b3527-2c39-4b72-b442-af5907679f78" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:18:40.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3306" for this suite. 
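The annotation-update spec above works because downward API volume files are kept in sync by the kubelet: when the pod's annotations change, the mounted file is eventually rewritten on the kubelet's sync interval, with no container restart. A sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo     # hypothetical name
  annotations:
    build: one
spec:
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["/bin/sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations

After kubectl annotate pod annotationupdate-demo build=two --overwrite, the file's contents catch up within the sync window.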
Jun 19 14:19:02.899: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:19:02.983: INFO: namespace projected-3306 deletion completed in 22.094213847s • [SLOW TEST:28.723 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:19:02.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:19:07.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-9775" for this suite. 
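The Kubelet spec above only needs a pod whose command writes to stdout, which the kubelet captures as the container log. A minimal version with placeholder names:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo         # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["/bin/sh", "-c", "echo 'Hello from stdout'"]

kubectl logs busybox-logs-demo then returns the echoed line.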
Jun 19 14:19:57.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:19:57.260: INFO: namespace kubelet-test-9775 deletion completed in 50.172008787s • [SLOW TEST:54.277 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:19:57.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-67cfc4f7-1f59-4ce6-a91b-5936c14654af STEP: Creating a pod to test consume secrets Jun 19 14:19:57.324: INFO: Waiting up to 5m0s for pod "pod-secrets-c949233e-a252-4a3e-9efd-ebddbc8a0f76" in namespace "secrets-1522" to be "success or failure" Jun 19 14:19:57.329: INFO: Pod "pod-secrets-c949233e-a252-4a3e-9efd-ebddbc8a0f76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.659669ms Jun 19 14:19:59.333: INFO: Pod "pod-secrets-c949233e-a252-4a3e-9efd-ebddbc8a0f76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008737459s Jun 19 14:20:01.337: INFO: Pod "pod-secrets-c949233e-a252-4a3e-9efd-ebddbc8a0f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0130713s STEP: Saw pod success Jun 19 14:20:01.337: INFO: Pod "pod-secrets-c949233e-a252-4a3e-9efd-ebddbc8a0f76" satisfied condition "success or failure" Jun 19 14:20:01.340: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-c949233e-a252-4a3e-9efd-ebddbc8a0f76 container secret-volume-test: STEP: delete the pod Jun 19 14:20:01.378: INFO: Waiting for pod pod-secrets-c949233e-a252-4a3e-9efd-ebddbc8a0f76 to disappear Jun 19 14:20:01.434: INFO: Pod pod-secrets-c949233e-a252-4a3e-9efd-ebddbc8a0f76 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:20:01.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1522" for this suite. 
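"With mappings" in the spec above means the secret's keys are remounted under custom file paths via items, rather than under their own names. Illustrative manifests:

apiVersion: v1
kind: Secret
metadata:
  name: secret-map-demo           # hypothetical name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-map-demo      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-map-demo
      items:
      - key: data-1
        path: new-path-data-1     # key mounted under a remapped filename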
Jun 19 14:20:07.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:20:07.518: INFO: namespace secrets-1522 deletion completed in 6.079418643s • [SLOW TEST:10.257 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:20:07.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-716 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-716 STEP: Deleting pre-stop pod Jun 19 14:20:20.634: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:20:20.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-716" for this suite. 
Jun 19 14:20:58.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:20:58.806: INFO: namespace prestop-716 deletion completed in 38.162027353s • [SLOW TEST:51.288 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:20:58.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-3a1e5609-c1da-435f-b16e-6875d43fcc55 STEP: Creating a pod to test consume secrets Jun 19 14:20:58.900: INFO: Waiting up to 5m0s for pod "pod-secrets-dae93846-e964-4fde-b5de-530a837663fa" in namespace "secrets-8839" to be "success or failure" Jun 19 14:20:58.928: INFO: Pod "pod-secrets-dae93846-e964-4fde-b5de-530a837663fa": Phase="Pending", Reason="", readiness=false. Elapsed: 28.422319ms Jun 19 14:21:00.932: INFO: Pod "pod-secrets-dae93846-e964-4fde-b5de-530a837663fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032561807s Jun 19 14:21:02.936: INFO: Pod "pod-secrets-dae93846-e964-4fde-b5de-530a837663fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036566847s STEP: Saw pod success Jun 19 14:21:02.936: INFO: Pod "pod-secrets-dae93846-e964-4fde-b5de-530a837663fa" satisfied condition "success or failure" Jun 19 14:21:02.939: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-dae93846-e964-4fde-b5de-530a837663fa container secret-volume-test: STEP: delete the pod Jun 19 14:21:02.962: INFO: Waiting for pod pod-secrets-dae93846-e964-4fde-b5de-530a837663fa to disappear Jun 19 14:21:02.974: INFO: Pod pod-secrets-dae93846-e964-4fde-b5de-530a837663fa no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:21:02.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8839" for this suite. 
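The non-root/defaultMode/fsGroup combination above comes down to three pod-spec fields. A hedged sketch (the user and group IDs and the 0440 mode are illustrative, not the values the test uses; it assumes the demo-secret from the earlier sketch exists):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mode-demo         # hypothetical name
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000              # non-root
        fsGroup: 1001                # group ownership applied to the volume
      containers:
      - name: secret-volume-test
        image: busybox
        command: ['ls', '-l', '/etc/secret-volume']
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
      volumes:
      - name: secret-volume
        secret:
          secretName: demo-secret
          defaultMode: 0440          # group-readable, so uid 1000 reads via fsGroup
    EOF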
Jun 19 14:21:09.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:21:09.179: INFO: namespace secrets-8839 deletion completed in 6.202133082s • [SLOW TEST:10.372 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:21:09.179: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 19 14:21:09.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2760' Jun 19 14:21:12.023: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 19 14:21:12.023: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Jun 19 14:21:14.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2760' Jun 19 14:21:14.225: INFO: stderr: "" Jun 19 14:21:14.225: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:21:14.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2760" for this suite. 
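The stderr warning above spells out the replacement for the deprecated deployment generator; on the v1.15 client used in this run, either of these is the non-deprecated equivalent:

    # explicit Deployment:
    kubectl create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
    # or a bare pod, as the warning suggests:
    kubectl run e2e-test-nginx --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine

(Later kubectl releases drop --generator entirely and make kubectl run pod-only.)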
Jun 19 14:22:36.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:22:36.378: INFO: namespace kubectl-2760 deletion completed in 1m22.148587356s • [SLOW TEST:87.199 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:22:36.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:22:40.543: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1216" for this suite. 
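hostAliases is plain pod-spec configuration; the kubelet merges the entries into the container's managed /etc/hosts, which is all this spec verifies. A sketch with illustrative addresses and hostnames:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostaliases-demo         # hypothetical name
    spec:
      restartPolicy: Never
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "foo.local"
        - "bar.local"
      containers:
      - name: busybox
        image: busybox
        command: ['cat', '/etc/hosts']   # the aliases appear in the output
    EOF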
Jun 19 14:23:22.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:23:22.642: INFO: namespace kubelet-test-1216 deletion completed in 42.094148251s • [SLOW TEST:46.264 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:23:22.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-70593b99-ac20-4b23-a314-62e23ebc2344 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:23:22.762: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5926" for this suite. Jun 19 14:23:28.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:23:28.856: INFO: namespace secrets-5926 deletion completed in 6.089270166s • [SLOW TEST:6.214 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:23:28.856: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jun 19 14:23:37.013: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:37.019: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:39.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:39.024: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:41.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:41.023: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:43.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:43.024: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:45.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:45.023: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:47.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:47.023: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:49.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:49.023: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:51.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:51.025: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:53.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:53.024: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:55.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:55.023: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:57.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:57.024: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:23:59.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:23:59.024: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:24:01.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:24:01.024: INFO: Pod pod-with-prestop-exec-hook still exists Jun 19 14:24:03.019: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jun 19 14:24:03.024: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:24:03.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-89" for this suite. 
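The long disappear/still-exists loop above is the expected shape for this spec: deleting the pod first runs the preStop exec hook, then waits out the termination grace period before the pod object goes away. A minimal sketch of such a hook (pod name and commands illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: prestop-exec-demo        # hypothetical name
    spec:
      containers:
      - name: main
        image: busybox
        command: ['sh', '-c', 'sleep 3600']
        lifecycle:
          preStop:
            exec:
              command: ['sh', '-c', 'echo prestop hook ran']
    EOF
    kubectl delete pod prestop-exec-demo   # hook executes before SIGTERM is delivered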
Jun 19 14:24:25.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:24:25.124: INFO: namespace container-lifecycle-hook-89 deletion completed in 22.088327724s • [SLOW TEST:56.267 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:24:25.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:24:25.177: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cfa2cf49-da43-4f57-a54c-6c406f8927a6" in namespace "projected-8272" to be "success or failure" Jun 19 14:24:25.241: INFO: Pod "downwardapi-volume-cfa2cf49-da43-4f57-a54c-6c406f8927a6": Phase="Pending", Reason="", readiness=false. Elapsed: 63.491643ms Jun 19 14:24:27.245: INFO: Pod "downwardapi-volume-cfa2cf49-da43-4f57-a54c-6c406f8927a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067507813s Jun 19 14:24:29.249: INFO: Pod "downwardapi-volume-cfa2cf49-da43-4f57-a54c-6c406f8927a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.071940533s STEP: Saw pod success Jun 19 14:24:29.249: INFO: Pod "downwardapi-volume-cfa2cf49-da43-4f57-a54c-6c406f8927a6" satisfied condition "success or failure" Jun 19 14:24:29.252: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-cfa2cf49-da43-4f57-a54c-6c406f8927a6 container client-container: STEP: delete the pod Jun 19 14:24:29.267: INFO: Waiting for pod downwardapi-volume-cfa2cf49-da43-4f57-a54c-6c406f8927a6 to disappear Jun 19 14:24:29.271: INFO: Pod downwardapi-volume-cfa2cf49-da43-4f57-a54c-6c406f8927a6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:24:29.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8272" for this suite. 
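The projected downwardAPI volume used here exposes a container resource field as a file. A sketch with an illustrative 500m limit (the test picks its own values); the optional divisor field, left at its default of "1" here, controls the units the value is reported in:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-cpu-limit-demo   # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ['cat', '/etc/podinfo/cpu_limit']
        resources:
          limits:
            cpu: 500m                    # illustrative
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
    EOF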
Jun 19 14:24:35.359: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:24:35.448: INFO: namespace projected-8272 deletion completed in 6.174117779s • [SLOW TEST:10.324 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:24:35.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:24:35.542: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1b94e939-cc82-4057-9851-d23b46dd2f7b" in namespace "downward-api-8876" to be "success or failure" Jun 19 14:24:35.547: INFO: Pod "downwardapi-volume-1b94e939-cc82-4057-9851-d23b46dd2f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.712059ms Jun 19 14:24:37.550: INFO: Pod "downwardapi-volume-1b94e939-cc82-4057-9851-d23b46dd2f7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007784208s Jun 19 14:24:39.554: INFO: Pod "downwardapi-volume-1b94e939-cc82-4057-9851-d23b46dd2f7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011766795s STEP: Saw pod success Jun 19 14:24:39.554: INFO: Pod "downwardapi-volume-1b94e939-cc82-4057-9851-d23b46dd2f7b" satisfied condition "success or failure" Jun 19 14:24:39.557: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-1b94e939-cc82-4057-9851-d23b46dd2f7b container client-container: STEP: delete the pod Jun 19 14:24:39.579: INFO: Waiting for pod downwardapi-volume-1b94e939-cc82-4057-9851-d23b46dd2f7b to disappear Jun 19 14:24:39.583: INFO: Pod downwardapi-volume-1b94e939-cc82-4057-9851-d23b46dd2f7b no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:24:39.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8876" for this suite. 
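This is the non-projected downwardAPI volume variant, and the per-item mode field is what the spec asserts on. A sketch; the 0400 mode and the choice of metadata.name are illustrative:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-mode-demo    # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ['ls', '-l', '/etc/podinfo/podname']
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400               # per-item file mode
    EOF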
Jun 19 14:24:45.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:24:45.681: INFO: namespace downward-api-8876 deletion completed in 6.095006968s • [SLOW TEST:10.232 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:24:45.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Jun 19 14:24:45.772: INFO: Waiting up to 5m0s for pod "client-containers-a6e6eda0-357d-4278-8efa-3a7ae9905953" in namespace "containers-1121" to be "success or failure" Jun 19 14:24:45.775: INFO: Pod "client-containers-a6e6eda0-357d-4278-8efa-3a7ae9905953": Phase="Pending", Reason="", readiness=false. Elapsed: 3.036501ms Jun 19 14:24:47.778: INFO: Pod "client-containers-a6e6eda0-357d-4278-8efa-3a7ae9905953": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006426226s Jun 19 14:24:49.783: INFO: Pod "client-containers-a6e6eda0-357d-4278-8efa-3a7ae9905953": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010839926s STEP: Saw pod success Jun 19 14:24:49.783: INFO: Pod "client-containers-a6e6eda0-357d-4278-8efa-3a7ae9905953" satisfied condition "success or failure" Jun 19 14:24:49.786: INFO: Trying to get logs from node iruya-worker pod client-containers-a6e6eda0-357d-4278-8efa-3a7ae9905953 container test-container: STEP: delete the pod Jun 19 14:24:49.825: INFO: Waiting for pod client-containers-a6e6eda0-357d-4278-8efa-3a7ae9905953 to disappear Jun 19 14:24:49.829: INFO: Pod client-containers-a6e6eda0-357d-4278-8efa-3a7ae9905953 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:24:49.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-1121" for this suite. 
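With command and args both absent from the container spec, the kubelet runs the image's own ENTRYPOINT/CMD untouched, which is all this spec checks. Sketch (the image choice is illustrative; the suite uses its own test image):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: image-defaults-demo      # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox               # no command/args: image defaults apply
    EOF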
Jun 19 14:24:55.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:24:55.928: INFO: namespace containers-1121 deletion completed in 6.09573031s • [SLOW TEST:10.246 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:24:55.929: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Jun 19 14:25:00.068: INFO: Pod pod-hostip-0b2fab31-7242-4092-b5b5-69908808c51e has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:25:00.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9516" for this suite. 
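status.hostIP, logged above as 172.17.0.5, is readable straight off the pod status; a hedged one-liner against a hypothetical pod name:

    kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'   # the node's address
    kubectl get pod pod-hostip-demo -o jsonpath='{.status.podIP}'    # the pod's own address, for contrast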
Jun 19 14:25:22.084: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:25:22.160: INFO: namespace pods-9516 deletion completed in 22.088262652s • [SLOW TEST:26.230 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:25:22.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-dea388cd-bfc7-4ead-bbcf-0f663f8adf65 STEP: Creating a pod to test consume configMaps Jun 19 14:25:22.233: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d176b1cc-f272-4332-b1bd-298a79267224" in namespace "projected-5296" to be "success or failure" Jun 19 14:25:22.237: INFO: Pod "pod-projected-configmaps-d176b1cc-f272-4332-b1bd-298a79267224": Phase="Pending", Reason="", readiness=false. Elapsed: 3.361991ms Jun 19 14:25:24.241: INFO: Pod "pod-projected-configmaps-d176b1cc-f272-4332-b1bd-298a79267224": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007782988s Jun 19 14:25:26.246: INFO: Pod "pod-projected-configmaps-d176b1cc-f272-4332-b1bd-298a79267224": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012234401s STEP: Saw pod success Jun 19 14:25:26.246: INFO: Pod "pod-projected-configmaps-d176b1cc-f272-4332-b1bd-298a79267224" satisfied condition "success or failure" Jun 19 14:25:26.248: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-d176b1cc-f272-4332-b1bd-298a79267224 container projected-configmap-volume-test: STEP: delete the pod Jun 19 14:25:26.280: INFO: Waiting for pod pod-projected-configmaps-d176b1cc-f272-4332-b1bd-298a79267224 to disappear Jun 19 14:25:26.284: INFO: Pod pod-projected-configmaps-d176b1cc-f272-4332-b1bd-298a79267224 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:25:26.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5296" for this suite. 
Jun 19 14:25:32.319: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:25:32.398: INFO: namespace projected-5296 deletion completed in 6.111034923s • [SLOW TEST:10.238 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:25:32.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Jun 19 14:25:32.526: INFO: Waiting up to 5m0s for pod "pod-5c0edf01-1b54-494c-b879-f29d39c15b1b" in namespace "emptydir-4106" to be "success or failure" Jun 19 14:25:32.531: INFO: Pod "pod-5c0edf01-1b54-494c-b879-f29d39c15b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.966591ms Jun 19 14:25:34.536: INFO: Pod "pod-5c0edf01-1b54-494c-b879-f29d39c15b1b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010154851s Jun 19 14:25:36.540: INFO: Pod "pod-5c0edf01-1b54-494c-b879-f29d39c15b1b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01489097s STEP: Saw pod success Jun 19 14:25:36.541: INFO: Pod "pod-5c0edf01-1b54-494c-b879-f29d39c15b1b" satisfied condition "success or failure" Jun 19 14:25:36.544: INFO: Trying to get logs from node iruya-worker2 pod pod-5c0edf01-1b54-494c-b879-f29d39c15b1b container test-container: STEP: delete the pod Jun 19 14:25:36.563: INFO: Waiting for pod pod-5c0edf01-1b54-494c-b879-f29d39c15b1b to disappear Jun 19 14:25:36.566: INFO: Pod pod-5c0edf01-1b54-494c-b879-f29d39c15b1b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:25:36.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4106" for this suite. 
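medium: Memory is what turns an emptyDir into the tmpfs variant exercised here. A sketch of just that volume wiring (pod name and commands illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo      # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ['sh', '-c', 'ls -ld /test-volume && mount | grep /test-volume']
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        emptyDir:
          medium: Memory             # tmpfs; omit medium for node-default disk backing
    EOF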
Jun 19 14:25:42.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:25:42.664: INFO: namespace emptydir-4106 deletion completed in 6.094731731s • [SLOW TEST:10.265 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:25:42.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. Jun 19 14:25:42.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:42.746: INFO: Number of nodes with available pods: 0 Jun 19 14:25:42.746: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:43.750: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:43.753: INFO: Number of nodes with available pods: 0 Jun 19 14:25:43.753: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:44.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:44.756: INFO: Number of nodes with available pods: 0 Jun 19 14:25:44.756: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:45.787: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:45.790: INFO: Number of nodes with available pods: 0 Jun 19 14:25:45.790: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:46.751: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:46.756: INFO: Number of nodes with available pods: 0 Jun 19 14:25:46.756: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:47.752: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:47.756: INFO: Number of nodes with available pods: 
2 Jun 19 14:25:47.756: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Jun 19 14:25:47.783: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:47.786: INFO: Number of nodes with available pods: 1 Jun 19 14:25:47.786: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:48.790: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:48.794: INFO: Number of nodes with available pods: 1 Jun 19 14:25:48.794: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:49.793: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:49.797: INFO: Number of nodes with available pods: 1 Jun 19 14:25:49.797: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:50.791: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:50.795: INFO: Number of nodes with available pods: 1 Jun 19 14:25:50.795: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:51.791: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:51.795: INFO: Number of nodes with available pods: 1 Jun 19 14:25:51.795: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:52.795: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:52.798: INFO: Number of nodes with available pods: 1 Jun 19 14:25:52.798: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:53.791: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:53.794: INFO: Number of nodes with available pods: 1 Jun 19 14:25:53.794: INFO: Node iruya-worker is running more than one daemon pod Jun 19 14:25:54.791: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Jun 19 14:25:54.794: INFO: Number of nodes with available pods: 2 Jun 19 14:25:54.794: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3187, will wait for the garbage collector to delete the pods Jun 19 14:25:54.858: INFO: Deleting DaemonSet.extensions daemon-set took: 7.257585ms Jun 19 14:25:55.158: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.230472ms Jun 19 14:25:58.974: INFO: Number of nodes with available pods: 0 Jun 19 14:25:58.974: INFO: Number of running nodes: 0, number of available pods: 0 Jun 19 14:25:58.978: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3187/daemonsets","resourceVersion":"17329916"},"items":null} Jun 19 14:25:58.980: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3187/pods","resourceVersion":"17329916"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:25:59.007: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3187" for this suite. Jun 19 14:26:05.026: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:26:05.099: INFO: namespace daemonsets-3187 deletion completed in 6.088842389s • [SLOW TEST:22.434 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:26:05.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-139.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-139.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-139.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-139.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-139.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 19 14:26:11.217: INFO: DNS probes using dns-139/dns-test-659b45c9-de86-4645-8536-5ae497c25cfb succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:26:11.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-139" for this suite. Jun 19 14:26:17.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:26:17.369: INFO: namespace dns-139 deletion completed in 6.10754263s • [SLOW TEST:12.270 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:26:17.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jun 19 14:26:17.408: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jun 19 14:26:17.443: INFO: Waiting for terminating namespaces to be deleted... 
Jun 19 14:26:17.446: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Jun 19 14:26:17.451: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 19 14:26:17.451: INFO: Container kube-proxy ready: true, restart count 0 Jun 19 14:26:17.451: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Jun 19 14:26:17.451: INFO: Container kindnet-cni ready: true, restart count 2 Jun 19 14:26:17.451: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Jun 19 14:26:17.456: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Jun 19 14:26:17.456: INFO: Container kube-proxy ready: true, restart count 0 Jun 19 14:26:17.456: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Jun 19 14:26:17.456: INFO: Container kindnet-cni ready: true, restart count 2 Jun 19 14:26:17.456: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Jun 19 14:26:17.456: INFO: Container coredns ready: true, restart count 0 Jun 19 14:26:17.456: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Jun 19 14:26:17.456: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e84f51a7-c489-46cb-86ae-8fbadccb8007 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-e84f51a7-c489-46cb-86ae-8fbadccb8007 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-e84f51a7-c489-46cb-86ae-8fbadccb8007 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:26:25.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9064" for this suite. 
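The label dance above (apply a random label, relaunch the pod with a matching nodeSelector, remove the label) looks like this when done by hand; the label key/value and the pause image are illustrative stand-ins:

    kubectl label node iruya-worker2 example.com/e2e-demo=42
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: nodeselector-demo        # hypothetical name
    spec:
      nodeSelector:
        example.com/e2e-demo: "42"   # pod schedules only onto labelled nodes
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF
    kubectl label node iruya-worker2 example.com/e2e-demo-   # trailing dash removes the label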
Jun 19 14:26:43.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:26:43.695: INFO: namespace sched-pred-9064 deletion completed in 18.09901241s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:26.325 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:26:43.695: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 19 14:26:43.810: INFO: Waiting up to 5m0s for pod "downward-api-55f0402d-688d-4d2b-b853-2855bdd52e6f" in namespace "downward-api-9130" to be "success or failure" Jun 19 14:26:43.831: INFO: Pod "downward-api-55f0402d-688d-4d2b-b853-2855bdd52e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.712983ms Jun 19 14:26:45.836: INFO: Pod "downward-api-55f0402d-688d-4d2b-b853-2855bdd52e6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026062293s Jun 19 14:26:47.839: INFO: Pod "downward-api-55f0402d-688d-4d2b-b853-2855bdd52e6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029712893s STEP: Saw pod success Jun 19 14:26:47.839: INFO: Pod "downward-api-55f0402d-688d-4d2b-b853-2855bdd52e6f" satisfied condition "success or failure" Jun 19 14:26:47.842: INFO: Trying to get logs from node iruya-worker2 pod downward-api-55f0402d-688d-4d2b-b853-2855bdd52e6f container dapi-container: STEP: delete the pod Jun 19 14:26:47.863: INFO: Waiting for pod downward-api-55f0402d-688d-4d2b-b853-2855bdd52e6f to disappear Jun 19 14:26:47.867: INFO: Pod downward-api-55f0402d-688d-4d2b-b853-2855bdd52e6f no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:26:47.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9130" for this suite. 
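The env-var flavour of the downward API used here references the container's own resources; containerName can be omitted from resourceFieldRef when a container refers to itself. A sketch with illustrative values:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-env-demo    # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ['sh', '-c', 'env | grep -E "CPU|MEMORY"']
        resources:
          requests:
            cpu: 250m
            memory: 32Mi
          limits:
            cpu: 500m
            memory: 64Mi
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_REQUEST
          valueFrom:
            resourceFieldRef:
              resource: requests.memory
    EOF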
Jun 19 14:26:53.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:26:53.963: INFO: namespace downward-api-9130 deletion completed in 6.091468768s • [SLOW TEST:10.268 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:26:53.964: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 19 14:26:54.032: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:26:59.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9121" for this suite. 
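A RestartNever pod whose init container fails is terminal: the pod moves to phase Failed and the app containers never start, which is what the spec above observes. Minimal sketch (names and commands illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo           # hypothetical name
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox
        command: ['sh', '-c', 'exit 1']   # fails deliberately
      containers:
      - name: main
        image: busybox
        command: ['sh', '-c', 'echo should never run']
    EOF
    kubectl get pod init-fail-demo    # expect STATUS Init:Error, then phase Failed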
Jun 19 14:27:05.986: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:27:06.064: INFO: namespace init-container-9121 deletion completed in 6.093532502s • [SLOW TEST:12.100 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:27:06.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:27:06.158: INFO: Waiting up to 5m0s for pod "downwardapi-volume-96f33837-7236-461a-a354-4d31c1d33dff" in namespace "projected-4699" to be "success or failure" Jun 19 14:27:06.162: INFO: Pod "downwardapi-volume-96f33837-7236-461a-a354-4d31c1d33dff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.291405ms Jun 19 14:27:08.167: INFO: Pod "downwardapi-volume-96f33837-7236-461a-a354-4d31c1d33dff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009026995s Jun 19 14:27:10.171: INFO: Pod "downwardapi-volume-96f33837-7236-461a-a354-4d31c1d33dff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013477284s STEP: Saw pod success Jun 19 14:27:10.171: INFO: Pod "downwardapi-volume-96f33837-7236-461a-a354-4d31c1d33dff" satisfied condition "success or failure" Jun 19 14:27:10.174: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-96f33837-7236-461a-a354-4d31c1d33dff container client-container: STEP: delete the pod Jun 19 14:27:10.352: INFO: Waiting for pod downwardapi-volume-96f33837-7236-461a-a354-4d31c1d33dff to disappear Jun 19 14:27:10.400: INFO: Pod downwardapi-volume-96f33837-7236-461a-a354-4d31c1d33dff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:27:10.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4699" for this suite. 
Jun 19 14:27:16.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:27:16.665: INFO: namespace projected-4699 deletion completed in 6.259689517s • [SLOW TEST:10.600 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:27:16.665: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6469.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.143.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.143.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.143.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.143.14_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6469.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6469.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6469.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6469.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6469.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 14.143.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.143.14_udp@PTR;check="$$(dig +tcp +noall +answer +search 14.143.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.143.14_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jun 19 14:27:22.923: INFO: Unable to read wheezy_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:22.926: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:22.930: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:22.933: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:22.957: INFO: Unable to read jessie_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:22.960: INFO: Unable to read jessie_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:22.964: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:22.967: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:22.988: INFO: Lookups using dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d failed for: [wheezy_udp@dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_udp@dns-test-service.dns-6469.svc.cluster.local jessie_tcp@dns-test-service.dns-6469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local] Jun 19 14:27:27.993: INFO: Unable to read wheezy_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:27.998: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods 
dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:28.002: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:28.005: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:28.022: INFO: Unable to read jessie_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:28.025: INFO: Unable to read jessie_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:28.028: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:28.031: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:28.051: INFO: Lookups using dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d failed for: [wheezy_udp@dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_udp@dns-test-service.dns-6469.svc.cluster.local jessie_tcp@dns-test-service.dns-6469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local] Jun 19 14:27:32.992: INFO: Unable to read wheezy_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:32.995: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:32.998: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:33.001: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:33.023: INFO: Unable to read jessie_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the 
server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:33.026: INFO: Unable to read jessie_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:33.029: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:33.032: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:33.052: INFO: Lookups using dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d failed for: [wheezy_udp@dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_udp@dns-test-service.dns-6469.svc.cluster.local jessie_tcp@dns-test-service.dns-6469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local] Jun 19 14:27:37.993: INFO: Unable to read wheezy_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:37.997: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:38.001: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:38.005: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:38.029: INFO: Unable to read jessie_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:38.032: INFO: Unable to read jessie_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:38.035: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:38.037: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod 
dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:38.052: INFO: Lookups using dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d failed for: [wheezy_udp@dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_udp@dns-test-service.dns-6469.svc.cluster.local jessie_tcp@dns-test-service.dns-6469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local] Jun 19 14:27:43.011: INFO: Unable to read wheezy_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:43.015: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:43.019: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:43.022: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:43.045: INFO: Unable to read jessie_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:43.048: INFO: Unable to read jessie_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:43.051: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:43.055: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:43.075: INFO: Lookups using dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d failed for: [wheezy_udp@dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_udp@dns-test-service.dns-6469.svc.cluster.local jessie_tcp@dns-test-service.dns-6469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local] Jun 19 
14:27:47.994: INFO: Unable to read wheezy_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:47.998: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:48.002: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:48.005: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:48.028: INFO: Unable to read jessie_udp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:48.030: INFO: Unable to read jessie_tcp@dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:48.033: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:48.036: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local from pod dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d: the server could not find the requested resource (get pods dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d) Jun 19 14:27:48.055: INFO: Lookups using dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d failed for: [wheezy_udp@dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@dns-test-service.dns-6469.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_udp@dns-test-service.dns-6469.svc.cluster.local jessie_tcp@dns-test-service.dns-6469.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6469.svc.cluster.local] Jun 19 14:27:53.072: INFO: DNS probes using dns-6469/dns-test-c445c96c-7d38-40c8-8cd4-48cfa019767d succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:27:53.416: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6469" for this suite. 
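The wheezy/jessie probe pods above loop over dig queries for the A and SRV records of a regular and a headless service; the repeated "Unable to read" lines are the poller retrying until cluster DNS publishes the records, after which the 14:27:53 line reports success. A quick manual spot-check in the same spirit, assuming the default cluster.local domain (service and namespace names follow the log; the busybox tag is chosen because its nslookup behaves predictably):

    kubectl run dns-check --rm -it --restart=Never --image=busybox:1.28 -- \
      nslookup dns-test-service.dns-6469.svc.cluster.local
    # SRV record for the named port (needs an image that ships dig, e.g. a dnsutils image):
    #   dig +short _http._tcp.dns-test-service.dns-6469.svc.cluster.local SRV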
Jun 19 14:27:59.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:27:59.581: INFO: namespace dns-6469 deletion completed in 6.153727123s • [SLOW TEST:42.916 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:27:59.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Jun 19 14:27:59.642: INFO: Waiting up to 5m0s for pod "var-expansion-63bf16f2-03c8-4ef5-a5e3-ed73926a87be" in namespace "var-expansion-6569" to be "success or failure" Jun 19 14:27:59.648: INFO: Pod "var-expansion-63bf16f2-03c8-4ef5-a5e3-ed73926a87be": Phase="Pending", Reason="", readiness=false. Elapsed: 5.651541ms Jun 19 14:28:01.672: INFO: Pod "var-expansion-63bf16f2-03c8-4ef5-a5e3-ed73926a87be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029003238s Jun 19 14:28:03.677: INFO: Pod "var-expansion-63bf16f2-03c8-4ef5-a5e3-ed73926a87be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034022242s STEP: Saw pod success Jun 19 14:28:03.677: INFO: Pod "var-expansion-63bf16f2-03c8-4ef5-a5e3-ed73926a87be" satisfied condition "success or failure" Jun 19 14:28:03.679: INFO: Trying to get logs from node iruya-worker pod var-expansion-63bf16f2-03c8-4ef5-a5e3-ed73926a87be container dapi-container: STEP: delete the pod Jun 19 14:28:03.718: INFO: Waiting for pod var-expansion-63bf16f2-03c8-4ef5-a5e3-ed73926a87be to disappear Jun 19 14:28:03.725: INFO: Pod var-expansion-63bf16f2-03c8-4ef5-a5e3-ed73926a87be no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:28:03.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-6569" for this suite. 
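The var-expansion pod relies on kubelet-side substitution: $(NAME) references in a container's command and args are replaced with the values of that container's environment variables before the process starts, independent of any shell. A minimal sketch with illustrative names:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: var-expansion-demo             # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox:1.29                # assumed image
        command: ["sh", "-c"]
        args: ["echo $(MESSAGE)"]          # $(MESSAGE) is expanded by the kubelet, not the shell
        env:
        - name: MESSAGE
          value: "substituted into args"
    EOF
    kubectl logs var-expansion-demo        # expected output: substituted into args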
Jun 19 14:28:09.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:28:09.827: INFO: namespace var-expansion-6569 deletion completed in 6.09852186s • [SLOW TEST:10.245 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:28:09.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 14:28:09.908: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:28:14.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6545" for this suite. 
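This test bypasses the CLI and dials the pod's exec subresource directly over a websocket. For reference, the everyday CLI equivalent, plus the approximate shape of the raw endpoint for anyone driving the API by hand (pod name and command are illustrative, and the endpoint sketch is an assumption about the API shape rather than a quote from the test):

    # CLI equivalent of what the websocket client exercises:
    kubectl exec websocket-demo -- echo remote execution works
    # Raw API shape, reachable for experimentation via `kubectl proxy`:
    #   GET /api/v1/namespaces/<ns>/pods/websocket-demo/exec?command=echo&command=hi&stdout=true
    # upgraded with an Upgrade: websocket handshake and the channel.k8s.io subprotocol.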
Jun 19 14:28:54.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:28:54.207: INFO: namespace pods-6545 deletion completed in 40.108616363s • [SLOW TEST:44.380 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:28:54.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 14:28:54.282: INFO: Creating ReplicaSet my-hostname-basic-e585869e-be5c-4480-acef-080d906cf91c Jun 19 14:28:54.290: INFO: Pod name my-hostname-basic-e585869e-be5c-4480-acef-080d906cf91c: Found 0 pods out of 1 Jun 19 14:28:59.294: INFO: Pod name my-hostname-basic-e585869e-be5c-4480-acef-080d906cf91c: Found 1 pods out of 1 Jun 19 14:28:59.294: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-e585869e-be5c-4480-acef-080d906cf91c" is running Jun 19 14:28:59.296: INFO: Pod "my-hostname-basic-e585869e-be5c-4480-acef-080d906cf91c-kskj2" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-19 14:28:54 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-19 14:28:57 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-19 14:28:57 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-06-19 14:28:54 +0000 UTC Reason: Message:}]) Jun 19 14:28:59.296: INFO: Trying to dial the pod Jun 19 14:29:04.309: INFO: Controller my-hostname-basic-e585869e-be5c-4480-acef-080d906cf91c: Got expected result from replica 1 [my-hostname-basic-e585869e-be5c-4480-acef-080d906cf91c-kskj2]: "my-hostname-basic-e585869e-be5c-4480-acef-080d906cf91c-kskj2", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:29:04.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-6317" for this suite. 
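The ReplicaSet check creates one replica of a hostname-echoing server, waits for it to be Running and Ready, then dials the pod and expects its own hostname back ("Got expected result from replica 1" above). A sketch of the shape of that manifest; the image is an assumption (any HTTP server that answers with its hostname would do), not the one used by the suite:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: my-hostname-basic-demo           # illustrative name
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: serve-hostname
      template:
        metadata:
          labels:
            app: serve-hostname
        spec:
          containers:
          - name: serve-hostname
            image: k8s.gcr.io/serve_hostname:1.1   # assumed image
            ports:
            - containerPort: 9376
    EOF
    kubectl wait --for=condition=Ready pod -l app=serve-hostname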
Jun 19 14:29:10.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:29:10.408: INFO: namespace replicaset-6317 deletion completed in 6.094185191s • [SLOW TEST:16.199 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:29:10.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:29:16.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4616" for this suite. Jun 19 14:29:22.821: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:29:22.898: INFO: namespace namespaces-4616 deletion completed in 6.106627381s STEP: Destroying namespace "nsdeletetest-3831" for this suite. Jun 19 14:29:22.900: INFO: Namespace nsdeletetest-3831 was already deleted STEP: Destroying namespace "nsdeletetest-7515" for this suite. 
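The namespace test above can be replayed by hand: create a namespace, put a Service in it, delete and recreate the namespace, and confirm the Service did not survive. A sketch with illustrative names:

    kubectl create namespace nsdelete-demo
    kubectl -n nsdelete-demo create service clusterip test-service --tcp=80:80
    kubectl delete namespace nsdelete-demo --wait=true
    kubectl create namespace nsdelete-demo
    kubectl -n nsdelete-demo get services    # expected: No resources found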
Jun 19 14:29:28.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:29:28.991: INFO: namespace nsdeletetest-7515 deletion completed in 6.09108875s • [SLOW TEST:18.583 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:29:28.991: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:29:55.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8049" for this suite. Jun 19 14:30:01.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:30:01.312: INFO: namespace namespaces-8049 deletion completed in 6.088691556s STEP: Destroying namespace "nsdeletetest-5895" for this suite. Jun 19 14:30:01.314: INFO: Namespace nsdeletetest-5895 was already deleted STEP: Destroying namespace "nsdeletetest-2608" for this suite. 
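The pod variant is the same dance; the longer runtime in the log (≈38 s versus ≈18 s for the Service case) comes from waiting for the pod to reach Running and then for its graceful termination during namespace teardown. A hand-run sketch, names illustrative:

    kubectl create namespace nsdelete-pods-demo
    kubectl -n nsdelete-pods-demo run sleeper --image=busybox:1.29 --restart=Never -- sleep 3600
    kubectl -n nsdelete-pods-demo wait --for=condition=Ready pod/sleeper
    kubectl delete namespace nsdelete-pods-demo --wait=true
    kubectl create namespace nsdelete-pods-demo
    kubectl -n nsdelete-pods-demo get pods   # expected: No resources found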
Jun 19 14:30:07.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:30:07.405: INFO: namespace nsdeletetest-2608 deletion completed in 6.090757065s • [SLOW TEST:38.414 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:30:07.405: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Jun 19 14:30:07.960: INFO: Pod name wrapped-volume-race-f192a677-63aa-43f1-9799-d8ec11c64904: Found 0 pods out of 5 Jun 19 14:30:12.980: INFO: Pod name wrapped-volume-race-f192a677-63aa-43f1-9799-d8ec11c64904: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f192a677-63aa-43f1-9799-d8ec11c64904 in namespace emptydir-wrapper-683, will wait for the garbage collector to delete the pods Jun 19 14:30:25.066: INFO: Deleting ReplicationController wrapped-volume-race-f192a677-63aa-43f1-9799-d8ec11c64904 took: 7.634026ms Jun 19 14:30:25.367: INFO: Terminating ReplicationController wrapped-volume-race-f192a677-63aa-43f1-9799-d8ec11c64904 pods took: 300.348825ms STEP: Creating RC which spawns configmap-volume pods Jun 19 14:31:02.419: INFO: Pod name wrapped-volume-race-28fb49b9-10ae-4d14-8ac6-0002fb700cbb: Found 0 pods out of 5 Jun 19 14:31:07.429: INFO: Pod name wrapped-volume-race-28fb49b9-10ae-4d14-8ac6-0002fb700cbb: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-28fb49b9-10ae-4d14-8ac6-0002fb700cbb in namespace emptydir-wrapper-683, will wait for the garbage collector to delete the pods Jun 19 14:31:21.519: INFO: Deleting ReplicationController wrapped-volume-race-28fb49b9-10ae-4d14-8ac6-0002fb700cbb took: 18.134487ms Jun 19 14:31:21.920: INFO: Terminating ReplicationController wrapped-volume-race-28fb49b9-10ae-4d14-8ac6-0002fb700cbb pods took: 400.289526ms STEP: Creating RC which spawns configmap-volume pods Jun 19 14:31:59.464: INFO: Pod name wrapped-volume-race-0fa9e7f7-6d6f-420c-8aa3-f520b25be86c: Found 0 pods out of 5 Jun 19 14:32:04.472: INFO: Pod name wrapped-volume-race-0fa9e7f7-6d6f-420c-8aa3-f520b25be86c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0fa9e7f7-6d6f-420c-8aa3-f520b25be86c in namespace emptydir-wrapper-683, will wait for the garbage collector to delete the pods Jun 19 14:32:18.551: INFO: Deleting 
ReplicationController wrapped-volume-race-0fa9e7f7-6d6f-420c-8aa3-f520b25be86c took: 7.392816ms Jun 19 14:32:18.851: INFO: Terminating ReplicationController wrapped-volume-race-0fa9e7f7-6d6f-420c-8aa3-f520b25be86c pods took: 300.201381ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:33:03.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-683" for this suite. Jun 19 14:33:11.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:33:11.293: INFO: namespace emptydir-wrapper-683 deletion completed in 8.11177494s • [SLOW TEST:183.888 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:33:11.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Jun 19 14:33:18.667: INFO: 0 pods remaining Jun 19 14:33:18.667: INFO: 0 pods has nil DeletionTimestamp Jun 19 14:33:18.667: INFO: Jun 19 14:33:19.152: INFO: 0 pods remaining Jun 19 14:33:19.152: INFO: 0 pods has nil DeletionTimestamp Jun 19 14:33:19.152: INFO: STEP: Gathering metrics W0619 14:33:20.302128 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Jun 19 14:33:20.302: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:33:20.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6448" for this suite. Jun 19 14:33:26.349: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:33:26.438: INFO: namespace gc-6448 deletion completed in 6.133550982s • [SLOW TEST:15.144 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:33:26.438: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Jun 19 14:33:26.503: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:33:26.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5593" for this suite. 
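Passing --port=0 (or -p 0) tells kubectl proxy to bind an ephemeral port and print the one it picked, which is what the test then curls for /api/ output. By hand (the port number below is illustrative):

    kubectl proxy --port=0 &
    # prints e.g.: Starting to serve on 127.0.0.1:42751
    curl http://127.0.0.1:42751/api/       # substitute the printed port
    kill %1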
Jun 19 14:33:32.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:33:32.687: INFO: namespace kubectl-5593 deletion completed in 6.097693994s • [SLOW TEST:6.249 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:33:32.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:33:36.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-8945" for this suite. 
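The "should not conflict" case mounts two wrapped volume types side by side in one pod and verifies that neither mount clobbers the other, while the earlier [Serial] race test stresses the same wrapper path with 50 configmap volumes across repeated ReplicationController churn. A minimal sketch of the non-conflicting pair, assuming illustrative names and a busybox image:

    kubectl create secret generic wrapped-secret --from-literal=key=value
    kubectl create configmap wrapped-configmap --from-literal=key=value
    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: wrapper-volume-demo            # illustrative name
    spec:
      restartPolicy: Never
      containers:
      - name: test
        image: busybox:1.29                # assumed image
        command: ["sh", "-c", "ls /etc/secret-vol /etc/configmap-vol"]
        volumeMounts:
        - name: secret-vol
          mountPath: /etc/secret-vol
        - name: configmap-vol
          mountPath: /etc/configmap-vol
      volumes:
      - name: secret-vol
        secret:
          secretName: wrapped-secret
      - name: configmap-vol
        configMap:
          name: wrapped-configmap
    EOF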
Jun 19 14:33:42.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:33:43.021: INFO: namespace emptydir-wrapper-8945 deletion completed in 6.101417107s • [SLOW TEST:10.332 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:33:43.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 14:33:43.109: INFO: Creating deployment "nginx-deployment" Jun 19 14:33:43.122: INFO: Waiting for observed generation 1 Jun 19 14:33:45.392: INFO: Waiting for all required pods to come up Jun 19 14:33:45.396: INFO: Pod name nginx: Found 10 pods out of 10 STEP: ensuring each pod is running Jun 19 14:33:55.417: INFO: Waiting for deployment "nginx-deployment" to complete Jun 19 14:33:55.440: INFO: Updating deployment "nginx-deployment" with a non-existent image Jun 19 14:33:55.447: INFO: Updating deployment nginx-deployment Jun 19 14:33:55.447: INFO: Waiting for observed generation 2 Jun 19 14:33:57.662: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Jun 19 14:33:57.665: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Jun 19 14:33:57.667: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 19 14:33:57.673: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Jun 19 14:33:57.673: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Jun 19 14:33:57.675: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas Jun 19 14:33:57.678: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas Jun 19 14:33:57.678: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30 Jun 19 14:33:57.682: INFO: Updating deployment nginx-deployment Jun 19 14:33:57.682: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas Jun 19 14:33:57.947: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Jun 19 14:33:57.984: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jun 19 14:33:58.354: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8481,SelfLink:/apis/apps/v1/namespaces/deployment-8481/deployments/nginx-deployment,UID:3a02027c-6bef-4cb6-b2bc-0e875f91d64d,ResourceVersion:17332475,Generation:3,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-06-19 14:33:55 +0000 UTC 2020-06-19 14:33:43 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-06-19 14:33:57 +0000 UTC 2020-06-19 14:33:57 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Jun 19 14:33:58.445: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8481,SelfLink:/apis/apps/v1/namespaces/deployment-8481/replicasets/nginx-deployment-55fb7cb77f,UID:3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d,ResourceVersion:17332495,Generation:3,CreationTimestamp:2020-06-19 14:33:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 3a02027c-6bef-4cb6-b2bc-0e875f91d64d 0xc002928c37 0xc002928c38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jun 19 14:33:58.445: INFO: All old ReplicaSets of Deployment "nginx-deployment": Jun 19 14:33:58.445: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8481,SelfLink:/apis/apps/v1/namespaces/deployment-8481/replicasets/nginx-deployment-7b8c6f4498,UID:b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432,ResourceVersion:17332490,Generation:3,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 3a02027c-6bef-4cb6-b2bc-0e875f91d64d 0xc002928d37 0xc002928d38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Jun 19 14:33:58.485: INFO: Pod "nginx-deployment-55fb7cb77f-5vzmd" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5vzmd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-5vzmd,UID:007809d6-11fe-4761-ba5a-1a2b8dd5cd38,ResourceVersion:17332476,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002929797 0xc002929798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929810} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.485: INFO: Pod "nginx-deployment-55fb7cb77f-8q2dw" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8q2dw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-8q2dw,UID:9c57e879-2de9-4180-904c-31aac6cc7ff3,ResourceVersion:17332424,Generation:0,CreationTimestamp:2020-06-19 14:33:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc0029298b7 0xc0029298b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929930} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 
0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-19 14:33:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.485: INFO: Pod "nginx-deployment-55fb7cb77f-c7nr8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-c7nr8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-c7nr8,UID:4f5a8714-44dc-4f8b-a12a-76bad31dc655,ResourceVersion:17332411,Generation:0,CreationTimestamp:2020-06-19 14:33:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002929a27 0xc002929a28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929aa0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929ac0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-19 14:33:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.485: INFO: Pod "nginx-deployment-55fb7cb77f-cjhkp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cjhkp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-cjhkp,UID:9801ba64-37ac-4ada-8331-fccee7c62379,ResourceVersion:17332492,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002929b97 0xc002929b98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929c10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929c30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.485: INFO: Pod "nginx-deployment-55fb7cb77f-fm6rn" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fm6rn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-fm6rn,UID:744e8302-bfca-44c0-9e39-8117ae94755b,ResourceVersion:17332481,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet 
nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002929cb7 0xc002929cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.486: INFO: Pod "nginx-deployment-55fb7cb77f-fmzw2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fmzw2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-fmzw2,UID:8daed792-9157-4d95-bf43-ba5c01d37d7f,ResourceVersion:17332401,Generation:0,CreationTimestamp:2020-06-19 14:33:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002929dd7 0xc002929dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929e50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-19 14:33:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.486: INFO: Pod "nginx-deployment-55fb7cb77f-ghds2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ghds2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-ghds2,UID:1e29a2d3-fd4b-429a-bc9b-fb5691b5c738,ResourceVersion:17332482,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002929f47 0xc002929f48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002929fc0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002929fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.486: INFO: Pod "nginx-deployment-55fb7cb77f-jxwc6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jxwc6,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-jxwc6,UID:cd9b48a3-9aee-4c20-9207-f9ab46791ac4,ResourceVersion:17332448,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002b94067 0xc002b94068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b940f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b94110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.486: INFO: Pod "nginx-deployment-55fb7cb77f-n4r7c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-n4r7c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-n4r7c,UID:50497c41-dac3-40d4-b577-243627160aed,ResourceVersion:17332503,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002b94197 0xc002b94198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b94210} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b94230}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-19 14:33:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.486: INFO: Pod "nginx-deployment-55fb7cb77f-pffbc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pffbc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-pffbc,UID:5492b034-ca87-4ff0-9e27-8e1751cece5d,ResourceVersion:17332460,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002b94307 0xc002b94308}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b94390} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b943b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.486: INFO: Pod "nginx-deployment-55fb7cb77f-pw5qv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-pw5qv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-pw5qv,UID:b331519b-4d50-4d19-8236-ef10698d1155,ResourceVersion:17332479,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002b94437 0xc002b94438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b944c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b944e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.486: INFO: Pod "nginx-deployment-55fb7cb77f-vzskq" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vzskq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-vzskq,UID:aa7578ff-df73-4c16-9819-feb894826f5f,ResourceVersion:17332425,Generation:0,CreationTimestamp:2020-06-19 14:33:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002b94567 0xc002b94568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b945e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b94600}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-19 14:33:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.487: INFO: Pod "nginx-deployment-55fb7cb77f-zcxn9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-zcxn9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-55fb7cb77f-zcxn9,UID:5dfa093f-4623-4b77-a5d5-4aefa7b2e7bb,ResourceVersion:17332430,Generation:0,CreationTimestamp:2020-06-19 14:33:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 3f6d5c6e-063e-4f16-ba19-68a9b3d78c7d 0xc002b946d7 0xc002b946d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b94760} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b94780}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-19 14:33:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.487: INFO: Pod "nginx-deployment-7b8c6f4498-6mq7z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6mq7z,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-6mq7z,UID:15bceae3-99d2-4946-9c57-c8d9f6caa8c3,ResourceVersion:17332487,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b94857 0xc002b94858}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b948d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b948f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.487: INFO: Pod "nginx-deployment-7b8c6f4498-6v92v" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6v92v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-6v92v,UID:ac684306-9158-4f5c-926b-71e5a76a23b3,ResourceVersion:17332488,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b94977 0xc002b94978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b949f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b94a10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-06-19 14:33:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.487: INFO: Pod "nginx-deployment-7b8c6f4498-76dsx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-76dsx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-76dsx,UID:1932fda2-2d0f-4805-8ceb-8d2718a385cd,ResourceVersion:17332346,Generation:0,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b94ad7 0xc002b94ad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b94b50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b94b80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:53 +0000 UTC } 
{ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.147,StartTime:2020-06-19 14:33:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:33:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://05dcefd5c40969a43b4512f48ef72a4f56cb3835b2171bf64709a798ddd1b253}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.487: INFO: Pod "nginx-deployment-7b8c6f4498-7nccz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7nccz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-7nccz,UID:b0b7012e-462f-4704-8134-ad2c6ebc8a6f,ResourceVersion:17332463,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b94c57 0xc002b94c58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b94cd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b94cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.487: INFO: Pod "nginx-deployment-7b8c6f4498-8pqhn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8pqhn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-8pqhn,UID:51a02afd-b078-41a0-a8ff-5a0decd8d0b9,ResourceVersion:17332366,Generation:0,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b94d77 0xc002b94d78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b94df0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b94e10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.179,StartTime:2020-06-19 14:33:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:33:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://204c2da9b3c315af98b11a0f09cd88866455f376b7a7f0f9307ddb9b32543995}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.488: INFO: Pod "nginx-deployment-7b8c6f4498-gx7z6" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gx7z6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-gx7z6,UID:b023bc17-fe6d-437f-8142-640004f8a6fb,ResourceVersion:17332345,Generation:0,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b94ef7 0xc002b94ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b94f70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b94f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.177,StartTime:2020-06-19 14:33:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:33:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://8487d050104fc226aed3a2c3c4f3dd81bcec9d9f9b1932f1bad21f2187334e18}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.488: INFO: Pod "nginx-deployment-7b8c6f4498-hfmj7" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hfmj7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-hfmj7,UID:fcecee12-c3b6-41b2-9c25-9c08ba9d54ad,ResourceVersion:17332466,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95067 0xc002b95068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b950e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b95120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.488: INFO: Pod "nginx-deployment-7b8c6f4498-jqnd9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jqnd9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-jqnd9,UID:75089376-76d9-4348-9c14-feb0d5edf35a,ResourceVersion:17332497,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b951a7 0xc002b951a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95220} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b95240}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:57 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-06-19 14:33:58 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.488: INFO: Pod "nginx-deployment-7b8c6f4498-jz8xs" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jz8xs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-jz8xs,UID:e1a6179c-5bcf-4792-b931-69b97c229162,ResourceVersion:17332326,Generation:0,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95317 0xc002b95318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b953b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b953d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.175,StartTime:2020-06-19 14:33:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:33:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://229b2e54596f4f97f11f8384664d9c2c00a8761db9775274ebd7c8910baadee5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.488: INFO: Pod "nginx-deployment-7b8c6f4498-kgb26" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-kgb26,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-kgb26,UID:d644be29-ba00-4c0e-8819-9a34de557514,ResourceVersion:17332485,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b954a7 0xc002b954a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95520} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b95540}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.488: INFO: Pod "nginx-deployment-7b8c6f4498-ktxhf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ktxhf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-ktxhf,UID:f4124a37-bf0e-4793-9084-03710a2f6e2e,ResourceVersion:17332467,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b955f7 0xc002b955f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95670} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b95690}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.488: INFO: Pod "nginx-deployment-7b8c6f4498-mjsr6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mjsr6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-mjsr6,UID:5f2c969d-7de7-478d-849a-06fba6b2d3c3,ResourceVersion:17332464,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95717 0xc002b95718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95790} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b957b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.488: INFO: Pod "nginx-deployment-7b8c6f4498-n4ltk" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n4ltk,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-n4ltk,UID:50fa2b70-f556-4c31-a3dc-fcfa28f9f28b,ResourceVersion:17332334,Generation:0,CreationTimestamp:2020-06-19 14:33:43 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95837 0xc002b95838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b958b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b958d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.146,StartTime:2020-06-19 14:33:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:33:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e7b811f413455b2907fc14a83f314d82cde789686f5ec6fe79412de8da221044}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.489: INFO: Pod "nginx-deployment-7b8c6f4498-ntzns" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ntzns,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-ntzns,UID:6f0449ab-4bf6-4a79-aba6-c54c3491e55b,ResourceVersion:17332484,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b959a7 
0xc002b959a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95a20} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b95a40}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.489: INFO: Pod "nginx-deployment-7b8c6f4498-s492s" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s492s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-s492s,UID:59bfb973-1f40-40a4-88ce-db8f0890ceb8,ResourceVersion:17332480,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95ac7 0xc002b95ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95b40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b95b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.489: INFO: Pod "nginx-deployment-7b8c6f4498-sc8m9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sc8m9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-sc8m9,UID:ca4ffb05-e76b-4b5d-a0ad-17a8ccd750b4,ResourceVersion:17332313,Generation:0,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95be7 0xc002b95be8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95c60} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002b95c80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.145,StartTime:2020-06-19 14:33:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:33:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://3930c4bf7f5c6be2c00828af0c194000a800a16226f528f1dbd228d5ad95e223}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.489: INFO: Pod "nginx-deployment-7b8c6f4498-skpgl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-skpgl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-skpgl,UID:9786ba68-640f-4b8d-8468-bb0c7f9da742,ResourceVersion:17332483,Generation:0,CreationTimestamp:2020-06-19 14:33:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95d57 0xc002b95d58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95dd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b95df0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:58 +0000 UTC 
}],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.489: INFO: Pod "nginx-deployment-7b8c6f4498-v44lz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-v44lz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-v44lz,UID:d78cc371-6225-4877-9240-89ac078f0a5a,ResourceVersion:17332450,Generation:0,CreationTimestamp:2020-06-19 14:33:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95e77 0xc002b95e78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002b95ef0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002b95f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:57 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.489: INFO: Pod "nginx-deployment-7b8c6f4498-wr86v" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wr86v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-wr86v,UID:f4da8e23-6977-4d37-9c77-5f8b51ab27a3,ResourceVersion:17332350,Generation:0,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002b95f97 
0xc002b95f98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d1c010} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d1c030}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.176,StartTime:2020-06-19 14:33:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:33:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://02fc8e763f01d50f72005048e47004148c8dd7adeb1c6e8ba12e91b94c2dcf6d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Jun 19 14:33:58.489: INFO: Pod "nginx-deployment-7b8c6f4498-xppcx" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xppcx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8481,SelfLink:/api/v1/namespaces/deployment-8481/pods/nginx-deployment-7b8c6f4498-xppcx,UID:fc2fe2a0-2b08-447e-b34f-84607335c1b0,ResourceVersion:17332368,Generation:0,CreationTimestamp:2020-06-19 14:33:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 b14d36aa-6d2a-4ee4-b0ae-ac8bdc7a6432 0xc002d1c107 0xc002d1c108}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nrnw8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nrnw8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-nrnw8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002d1c180} {node.kubernetes.io/unreachable Exists NoExecute 0xc002d1c1a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:54 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:54 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:33:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.149,StartTime:2020-06-19 14:33:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-06-19 14:33:53 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fba52c0456e3c91f2c7360bb96b7dd0120f9682e69318a469908da303edf2309}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:33:58.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8481" for this suite. 
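
Note on the dumps above: the proportional-scaling test scales the Deployment in the middle of a rollout, so pods owned by the old and new ReplicaSets coexist, some available and some still Pending, and the controller splits the scale change between ReplicaSets in proportion to their sizes. A minimal Go sketch of a comparable Deployment follows; the surge and unavailability values are illustrative assumptions, not values read from this log.

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        replicas := int32(10)
        maxSurge := intstr.FromInt(3)       // extra pods allowed above the desired count (assumed value)
        maxUnavailable := intstr.FromInt(2) // pods that may be unavailable mid-rollout (assumed value)

        d := appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "nginx-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: &replicas,
                Selector: &metav1.LabelSelector{MatchLabels: map[string]string{"name": "nginx"}},
                Strategy: appsv1.DeploymentStrategy{
                    Type: appsv1.RollingUpdateDeploymentStrategyType,
                    RollingUpdate: &appsv1.RollingUpdateDeployment{
                        MaxSurge:       &maxSurge,
                        MaxUnavailable: &maxUnavailable,
                    },
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: map[string]string{"name": "nginx"}},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "nginx",
                            Image: "docker.io/library/nginx:1.14-alpine",
                        }},
                    },
                },
            },
        }
        fmt.Printf("strategy: %+v\n", d.Spec.Strategy)
    }

Scaling such a Deployment while both ReplicaSets still own pods should leave the old/new replica ratio roughly intact, which is what the per-pod "is available" / "is not available" dumps above let the test verify.
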
Jun 19 14:34:18.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:34:18.680: INFO: namespace deployment-8481 deletion completed in 20.118477752s • [SLOW TEST:35.657 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:34:18.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 14:34:19.568: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"ef90c2a5-15af-48d9-97d5-b4e41cc08ff8", Controller:(*bool)(0xc002c24d22), BlockOwnerDeletion:(*bool)(0xc002c24d23)}} Jun 19 14:34:19.579: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9b8aa6bc-90fa-4a9d-b7e0-e0451170f7c4", Controller:(*bool)(0xc002c24f12), BlockOwnerDeletion:(*bool)(0xc002c24f13)}} Jun 19 14:34:19.867: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"84e3c4b4-7220-4a94-bfd2-578188c834b6", Controller:(*bool)(0xc00097cd72), BlockOwnerDeletion:(*bool)(0xc00097cd73)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:34:24.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9283" for this suite. 
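
Note on the garbage collector test above: the three OwnerReferences printed form a deliberate circle (pod1 owned by pod3, pod2 by pod1, pod3 by pod2), and the test verifies that deletion is not blocked by the circular references, i.e. the garbage collector breaks the cycle rather than deadlocking. A sketch of how one link in that chain is declared, using a placeholder UID:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/types"
    )

    func main() {
        ctrl := true
        // pod2 names pod1 as its controlling owner; pod1 -> pod3 and
        // pod3 -> pod2 are wired the same way to close the circle.
        pod2 := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name: "pod2",
                OwnerReferences: []metav1.OwnerReference{{
                    APIVersion:         "v1",
                    Kind:               "Pod",
                    Name:               "pod1",
                    UID:                types.UID("placeholder-uid"), // the real UID comes from the created pod
                    Controller:         &ctrl,
                    BlockOwnerDeletion: &ctrl,
                }},
            },
        }
        fmt.Println(pod2.ObjectMeta.OwnerReferences)
    }
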
Jun 19 14:34:31.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:34:31.192: INFO: namespace gc-9283 deletion completed in 6.224024956s • [SLOW TEST:12.512 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:34:31.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-gl8n STEP: Creating a pod to test atomic-volume-subpath Jun 19 14:34:31.335: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-gl8n" in namespace "subpath-2926" to be "success or failure" Jun 19 14:34:31.340: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.75037ms Jun 19 14:34:33.344: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008596263s Jun 19 14:34:35.349: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 4.013442076s Jun 19 14:34:37.353: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 6.017579807s Jun 19 14:34:39.358: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 8.022655829s Jun 19 14:34:41.363: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 10.027109217s Jun 19 14:34:43.367: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 12.031598769s Jun 19 14:34:45.371: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 14.035224528s Jun 19 14:34:47.375: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 16.039380777s Jun 19 14:34:49.380: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 18.044056829s Jun 19 14:34:51.384: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 20.048443535s Jun 19 14:34:53.388: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Running", Reason="", readiness=true. Elapsed: 22.052222593s Jun 19 14:34:55.392: INFO: Pod "pod-subpath-test-downwardapi-gl8n": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.057023992s STEP: Saw pod success Jun 19 14:34:55.393: INFO: Pod "pod-subpath-test-downwardapi-gl8n" satisfied condition "success or failure" Jun 19 14:34:55.396: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-gl8n container test-container-subpath-downwardapi-gl8n: STEP: delete the pod Jun 19 14:34:55.428: INFO: Waiting for pod pod-subpath-test-downwardapi-gl8n to disappear Jun 19 14:34:55.432: INFO: Pod pod-subpath-test-downwardapi-gl8n no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-gl8n Jun 19 14:34:55.432: INFO: Deleting pod "pod-subpath-test-downwardapi-gl8n" in namespace "subpath-2926" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:34:55.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2926" for this suite. Jun 19 14:35:01.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:35:01.539: INFO: namespace subpath-2926 deletion completed in 6.080060146s • [SLOW TEST:30.347 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:35:01.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-d73702ed-8c9a-4f70-9b0f-2d05b89cff1c STEP: Creating a pod to test consume configMaps Jun 19 14:35:01.671: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f3d052e1-8b28-4473-b1cf-7a4abd4dc945" in namespace "projected-4096" to be "success or failure" Jun 19 14:35:01.692: INFO: Pod "pod-projected-configmaps-f3d052e1-8b28-4473-b1cf-7a4abd4dc945": Phase="Pending", Reason="", readiness=false. Elapsed: 20.379213ms Jun 19 14:35:03.765: INFO: Pod "pod-projected-configmaps-f3d052e1-8b28-4473-b1cf-7a4abd4dc945": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093801737s Jun 19 14:35:05.769: INFO: Pod "pod-projected-configmaps-f3d052e1-8b28-4473-b1cf-7a4abd4dc945": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.098070161s STEP: Saw pod success Jun 19 14:35:05.769: INFO: Pod "pod-projected-configmaps-f3d052e1-8b28-4473-b1cf-7a4abd4dc945" satisfied condition "success or failure" Jun 19 14:35:05.772: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-f3d052e1-8b28-4473-b1cf-7a4abd4dc945 container projected-configmap-volume-test: STEP: delete the pod Jun 19 14:35:05.802: INFO: Waiting for pod pod-projected-configmaps-f3d052e1-8b28-4473-b1cf-7a4abd4dc945 to disappear Jun 19 14:35:05.809: INFO: Pod pod-projected-configmaps-f3d052e1-8b28-4473-b1cf-7a4abd4dc945 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:35:05.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4096" for this suite. Jun 19 14:35:11.824: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:35:11.900: INFO: namespace projected-4096 deletion completed in 6.088719245s • [SLOW TEST:10.361 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:35:11.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Jun 19 14:35:11.972: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
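
Registering the sample API server involves more than the Deployment whose status is polled below: the test also fronts it with a Service and creates an APIService object that tells the aggregation layer to route an entire API group/version to that Service. A sketch of such an APIService; the group, version, and service name here are assumptions based on the upstream sample-apiserver ("wardle"), not values printed in this log:

    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        apiregv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
    )

    func main() {
        apiService := apiregv1.APIService{
            ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.k8s.io"}, // assumed group/version
            Spec: apiregv1.APIServiceSpec{
                Service: &apiregv1.ServiceReference{
                    Namespace: "aggregator-4065",
                    Name:      "sample-api", // assumed service name
                },
                Group:                 "wardle.k8s.io", // assumed
                Version:               "v1alpha1",
                InsecureSkipTLSVerify: true, // a hardened setup would pin CABundle instead
                GroupPriorityMinimum:  2000,
                VersionPriority:       200,
            },
        }
        fmt.Println(apiService.Name)
    }

Once the APIService reports Available, requests for that group/version arrive at kube-apiserver and are proxied to the sample server, which is what "ready to handle requests" below means.
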
Jun 19 14:35:12.438: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Jun 19 14:35:14.783: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174112, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174112, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174112, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174112, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 14:35:16.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174112, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174112, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174112, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174112, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Jun 19 14:35:19.521: INFO: Waited 726.12828ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:35:20.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4065" for this suite. 
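
The two DeploymentStatus dumps above show the same rollout at consecutive polls: ReadyReplicas stays at 0 and the Available condition stays False with reason MinimumReplicasUnavailable until the sample-apiserver pod comes up. A small helper equivalent to what such a poll inspects (a sketch, not the framework's own code):

    package main

    import (
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
    )

    // deploymentAvailable reports whether the Available condition is True.
    func deploymentAvailable(d *appsv1.Deployment) bool {
        for _, c := range d.Status.Conditions {
            if c.Type == appsv1.DeploymentAvailable && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        var d appsv1.Deployment // in practice this is fetched from the API server in a poll loop
        fmt.Println(deploymentAvailable(&d))
    }
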
Jun 19 14:35:26.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:35:26.241: INFO: namespace aggregator-4065 deletion completed in 6.095726938s • [SLOW TEST:14.340 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:35:26.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jun 19 14:35:26.297: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1453' Jun 19 14:35:29.159: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jun 19 14:35:29.160: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Jun 19 14:35:29.187: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-84zfw] Jun 19 14:35:29.187: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-84zfw" in namespace "kubectl-1453" to be "running and ready" Jun 19 14:35:29.207: INFO: Pod "e2e-test-nginx-rc-84zfw": Phase="Pending", Reason="", readiness=false. Elapsed: 20.289619ms Jun 19 14:35:31.228: INFO: Pod "e2e-test-nginx-rc-84zfw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041759891s Jun 19 14:35:33.232: INFO: Pod "e2e-test-nginx-rc-84zfw": Phase="Running", Reason="", readiness=true. Elapsed: 4.045240165s Jun 19 14:35:33.232: INFO: Pod "e2e-test-nginx-rc-84zfw" satisfied condition "running and ready" Jun 19 14:35:33.232: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-84zfw] Jun 19 14:35:33.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-1453' Jun 19 14:35:33.352: INFO: stderr: "" Jun 19 14:35:33.352: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Jun 19 14:35:33.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1453' Jun 19 14:35:33.475: INFO: stderr: "" Jun 19 14:35:33.475: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:35:33.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1453" for this suite. Jun 19 14:35:55.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:35:55.572: INFO: namespace kubectl-1453 deletion completed in 22.093971364s • [SLOW TEST:29.331 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:35:55.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Jun 19 14:35:55.640: INFO: Waiting up to 5m0s for pod "client-containers-6cd3ae50-c968-46fa-aa3c-35ede61719ec" in namespace "containers-986" to be "success or failure" Jun 19 14:35:55.643: INFO: Pod "client-containers-6cd3ae50-c968-46fa-aa3c-35ede61719ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.302324ms Jun 19 14:35:57.755: INFO: Pod "client-containers-6cd3ae50-c968-46fa-aa3c-35ede61719ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114839182s Jun 19 14:35:59.760: INFO: Pod "client-containers-6cd3ae50-c968-46fa-aa3c-35ede61719ec": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.11993577s STEP: Saw pod success Jun 19 14:35:59.760: INFO: Pod "client-containers-6cd3ae50-c968-46fa-aa3c-35ede61719ec" satisfied condition "success or failure" Jun 19 14:35:59.763: INFO: Trying to get logs from node iruya-worker2 pod client-containers-6cd3ae50-c968-46fa-aa3c-35ede61719ec container test-container: STEP: delete the pod Jun 19 14:35:59.792: INFO: Waiting for pod client-containers-6cd3ae50-c968-46fa-aa3c-35ede61719ec to disappear Jun 19 14:35:59.805: INFO: Pod client-containers-6cd3ae50-c968-46fa-aa3c-35ede61719ec no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:35:59.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-986" for this suite. Jun 19 14:36:05.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:36:05.889: INFO: namespace containers-986 deletion completed in 6.079880409s • [SLOW TEST:10.316 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:36:05.889: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jun 19 14:36:05.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9693' Jun 19 14:36:06.295: INFO: stderr: "" Jun 19 14:36:06.295: INFO: stdout: "replicationcontroller/redis-master created\n" Jun 19 14:36:06.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9693' Jun 19 14:36:06.658: INFO: stderr: "" Jun 19 14:36:06.658: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Jun 19 14:36:07.663: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:36:07.663: INFO: Found 0 / 1 Jun 19 14:36:08.662: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:36:08.662: INFO: Found 0 / 1 Jun 19 14:36:09.663: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:36:09.663: INFO: Found 1 / 1 Jun 19 14:36:09.663: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jun 19 14:36:09.667: INFO: Selector matched 1 pods for map[app:redis] Jun 19 14:36:09.667: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
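
The "Found 0 / 1 ... Found 1 / 1" lines above come from a selector-based wait: the test repeatedly lists pods matching app=redis and counts those that satisfy it (the real check also looks at pod phase). A condensed sketch of that pattern with client-go, assuming a release contemporary with this v1.15 suite (newer client-go List calls also take a context.Context) and a clientset constructed elsewhere:

    package kubewait

    import (
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForPods polls until at least n pods matching selector exist in ns.
    func waitForPods(cs kubernetes.Interface, ns, selector string, n int) error {
        return wait.PollImmediate(time.Second, 5*time.Minute, func() (bool, error) {
            pods, err := cs.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: selector})
            if err != nil {
                return false, err
            }
            fmt.Printf("Found %d / %d\n", len(pods.Items), n)
            return len(pods.Items) >= n, nil
        })
    }
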
Jun 19 14:36:09.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-s6qcf --namespace=kubectl-9693' Jun 19 14:36:09.781: INFO: stderr: "" Jun 19 14:36:09.781: INFO: stdout: "Name: redis-master-s6qcf\nNamespace: kubectl-9693\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Fri, 19 Jun 2020 14:36:06 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.166\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://a8928aa058845ff32908fb4c00b771f26d7f246bf67aac2cf95614bef6621434\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 19 Jun 2020 14:36:09 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-lsbp4 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-lsbp4:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-lsbp4\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-9693/redis-master-s6qcf to iruya-worker2\n Normal Pulled 2s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 0s kubelet, iruya-worker2 Started container redis-master\n" Jun 19 14:36:09.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9693' Jun 19 14:36:09.893: INFO: stderr: "" Jun 19 14:36:09.893: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9693\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-s6qcf\n" Jun 19 14:36:09.893: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9693' Jun 19 14:36:09.990: INFO: stderr: "" Jun 19 14:36:09.990: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9693\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.101.58.8\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.166:6379\nSession Affinity: None\nEvents: \n" Jun 19 14:36:09.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Jun 19 14:36:10.115: INFO: stderr: "" Jun 19 14:36:10.115: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n 
kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 19 Jun 2020 14:35:28 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 19 Jun 2020 14:35:28 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 19 Jun 2020 14:35:28 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 19 Jun 2020 14:35:28 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 95d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 95d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 95d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 95d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 95d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 95d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 95d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jun 19 14:36:10.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9693' Jun 19 14:36:10.214: INFO: stderr: "" Jun 19 14:36:10.214: INFO: stdout: "Name: kubectl-9693\nLabels: e2e-framework=kubectl\n e2e-run=5d4a0555-759e-4286-b1eb-4cf6f98383c4\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:36:10.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9693" for this suite. 
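
For reference, the describe calls exercised above reduce to a handful of plain kubectl invocations; the resource names below are the generated ones from this run and would differ in any other cluster:

kubectl describe pod redis-master-s6qcf --namespace=kubectl-9693
kubectl describe rc redis-master --namespace=kubectl-9693
kubectl describe service redis-master --namespace=kubectl-9693
kubectl describe node iruya-control-plane
kubectl describe namespace kubectl-9693

The spec asserts, roughly, that each report surfaces the relevant identity, labels, and recent events for the resource.
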
Jun 19 14:36:32.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:36:32.357: INFO: namespace kubectl-9693 deletion completed in 22.139773633s • [SLOW TEST:26.468 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:36:32.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Jun 19 14:36:32.448: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6514,SelfLink:/api/v1/namespaces/watch-6514/configmaps/e2e-watch-test-resource-version,UID:46a90d7a-f671-4e82-a08d-85971b0b2ee8,ResourceVersion:17333323,Generation:0,CreationTimestamp:2020-06-19 14:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jun 19 14:36:32.449: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6514,SelfLink:/api/v1/namespaces/watch-6514/configmaps/e2e-watch-test-resource-version,UID:46a90d7a-f671-4e82-a08d-85971b0b2ee8,ResourceVersion:17333324,Generation:0,CreationTimestamp:2020-06-19 14:36:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:36:32.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6514" for this suite. 
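
Context for the watcher spec: the API's list/watch contract lets a client resume from a known resourceVersion and receive only events that happened after it, which is why only the second MODIFIED and the DELETED events appear above. A minimal sketch of the same call against the raw API, assuming kubectl proxy on its default port and a ConfigMap name of my-config (both placeholders, not from this log):

kubectl proxy --port=8001 &
RV=$(kubectl get configmap my-config -o jsonpath='{.metadata.resourceVersion}')
# Streams only events newer than the captured resourceVersion
curl "http://127.0.0.1:8001/api/v1/namespaces/default/configmaps?watch=true&resourceVersion=${RV}"
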
Jun 19 14:36:38.473: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:36:38.560: INFO: namespace watch-6514 deletion completed in 6.107075299s • [SLOW TEST:6.202 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:36:38.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0619 14:36:39.441395 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 19 14:36:39.441: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:36:39.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3413" for this suite. 
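
What this garbage-collector spec exercises is the default, non-orphaning cascade: the Deployment owns its ReplicaSet through metadata.ownerReferences, so deleting the owner lets the collector remove the ReplicaSet and its Pods. A hand-run sketch with illustrative names:

kubectl create deployment demo --image=nginx
kubectl get rs -l app=demo           # ReplicaSet created and owned by the Deployment
kubectl delete deployment demo       # no orphaning requested, so dependents are collected
kubectl get rs,pods -l app=demo      # empties out once the collector catches up

The transient "expected 0 rs, got 1 rs" lines above are that propagation delay being polled.
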
Jun 19 14:36:45.461: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:36:45.584: INFO: namespace gc-3413 deletion completed in 6.139404117s • [SLOW TEST:7.024 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:36:45.585: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0619 14:36:57.575066 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jun 19 14:36:57.575: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:36:57.575: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1287" for this suite. 
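
The surviving pods in this spec carry two ownerReferences, and the collector only deletes an object once every owner is gone, even while one owner is being deleted in foreground mode and waiting for its dependents. Inspecting ownership directly, with a placeholder pod name:

kubectl get pod simpletest-rc-to-be-deleted-xxxxx \
  -o jsonpath='{.metadata.ownerReferences[*].name}'
# A pod listing both RCs outlives the deletion of simpletest-rc-to-be-deleted
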
Jun 19 14:37:05.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:37:05.687: INFO: namespace gc-1287 deletion completed in 8.109924492s • [SLOW TEST:20.103 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:37:05.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-7841/configmap-test-77544511-a2fb-4444-afcd-57e28e59a726 STEP: Creating a pod to test consume configMaps Jun 19 14:37:05.761: INFO: Waiting up to 5m0s for pod "pod-configmaps-83cb71c9-b50d-4f9d-8799-bea0f1ee922d" in namespace "configmap-7841" to be "success or failure" Jun 19 14:37:05.771: INFO: Pod "pod-configmaps-83cb71c9-b50d-4f9d-8799-bea0f1ee922d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.641408ms Jun 19 14:37:07.775: INFO: Pod "pod-configmaps-83cb71c9-b50d-4f9d-8799-bea0f1ee922d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014155675s Jun 19 14:37:09.779: INFO: Pod "pod-configmaps-83cb71c9-b50d-4f9d-8799-bea0f1ee922d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018106803s STEP: Saw pod success Jun 19 14:37:09.779: INFO: Pod "pod-configmaps-83cb71c9-b50d-4f9d-8799-bea0f1ee922d" satisfied condition "success or failure" Jun 19 14:37:09.782: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-83cb71c9-b50d-4f9d-8799-bea0f1ee922d container env-test: STEP: delete the pod Jun 19 14:37:09.809: INFO: Waiting for pod pod-configmaps-83cb71c9-b50d-4f9d-8799-bea0f1ee922d to disappear Jun 19 14:37:09.833: INFO: Pod pod-configmaps-83cb71c9-b50d-4f9d-8799-bea0f1ee922d no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:37:09.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7841" for this suite. 
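
The env-consumption pattern tested above, in manifest form; a minimal sketch with invented names, applied to the current namespace:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: env-test
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "env"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: demo-config
          key: data-1
EOF

Once the pod has succeeded, kubectl logs env-test shows CONFIG_DATA_1=value-1 among the environment.
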
Jun 19 14:37:15.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:37:15.951: INFO: namespace configmap-7841 deletion completed in 6.105171571s • [SLOW TEST:10.264 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:37:15.952: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Jun 19 14:37:16.013: INFO: Waiting up to 5m0s for pod "downward-api-d6c695ea-e4e6-4812-8218-e0e0c6105dba" in namespace "downward-api-4435" to be "success or failure" Jun 19 14:37:16.016: INFO: Pod "downward-api-d6c695ea-e4e6-4812-8218-e0e0c6105dba": Phase="Pending", Reason="", readiness=false. Elapsed: 3.108867ms Jun 19 14:37:18.060: INFO: Pod "downward-api-d6c695ea-e4e6-4812-8218-e0e0c6105dba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046789313s Jun 19 14:37:20.065: INFO: Pod "downward-api-d6c695ea-e4e6-4812-8218-e0e0c6105dba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051544671s STEP: Saw pod success Jun 19 14:37:20.065: INFO: Pod "downward-api-d6c695ea-e4e6-4812-8218-e0e0c6105dba" satisfied condition "success or failure" Jun 19 14:37:20.068: INFO: Trying to get logs from node iruya-worker2 pod downward-api-d6c695ea-e4e6-4812-8218-e0e0c6105dba container dapi-container: STEP: delete the pod Jun 19 14:37:20.126: INFO: Waiting for pod downward-api-d6c695ea-e4e6-4812-8218-e0e0c6105dba to disappear Jun 19 14:37:20.155: INFO: Pod downward-api-d6c695ea-e4e6-4812-8218-e0e0c6105dba no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:37:20.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4435" for this suite. 
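
The downward API fields this spec reads map one-to-one onto fieldRef entries; a minimal sketch, names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "echo $POD_NAME $POD_NAMESPACE $POD_IP"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF
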
Jun 19 14:37:26.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:37:26.360: INFO: namespace downward-api-4435 deletion completed in 6.201212383s • [SLOW TEST:10.407 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:37:26.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3044 STEP: creating a selector STEP: Creating the service pods in kubernetes Jun 19 14:37:26.394: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jun 19 14:37:54.523: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.202 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3044 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 14:37:54.523: INFO: >>> kubeConfig: /root/.kube/config I0619 14:37:54.550121 6 log.go:172] (0xc000b50210) (0xc0017d41e0) Create stream I0619 14:37:54.550143 6 log.go:172] (0xc000b50210) (0xc0017d41e0) Stream added, broadcasting: 1 I0619 14:37:54.552048 6 log.go:172] (0xc000b50210) Reply frame received for 1 I0619 14:37:54.552093 6 log.go:172] (0xc000b50210) (0xc0018b0000) Create stream I0619 14:37:54.552116 6 log.go:172] (0xc000b50210) (0xc0018b0000) Stream added, broadcasting: 3 I0619 14:37:54.553399 6 log.go:172] (0xc000b50210) Reply frame received for 3 I0619 14:37:54.553456 6 log.go:172] (0xc000b50210) (0xc0017d4280) Create stream I0619 14:37:54.553481 6 log.go:172] (0xc000b50210) (0xc0017d4280) Stream added, broadcasting: 5 I0619 14:37:54.554605 6 log.go:172] (0xc000b50210) Reply frame received for 5 I0619 14:37:55.667618 6 log.go:172] (0xc000b50210) Data frame received for 3 I0619 14:37:55.667678 6 log.go:172] (0xc0018b0000) (3) Data frame handling I0619 14:37:55.667706 6 log.go:172] (0xc0018b0000) (3) Data frame sent I0619 14:37:55.667744 6 log.go:172] (0xc000b50210) Data frame received for 3 I0619 14:37:55.667766 6 log.go:172] (0xc0018b0000) (3) Data frame handling I0619 14:37:55.667834 6 log.go:172] (0xc000b50210) Data frame received for 5 I0619 14:37:55.667904 6 log.go:172] (0xc0017d4280) (5) Data frame handling I0619 14:37:55.669832 6 log.go:172] (0xc000b50210) Data frame received for 1 I0619 14:37:55.669920 6 log.go:172] (0xc0017d41e0) (1) Data frame handling I0619 14:37:55.669962 6 log.go:172] (0xc0017d41e0) (1) Data frame sent I0619 
14:37:55.670000 6 log.go:172] (0xc000b50210) (0xc0017d41e0) Stream removed, broadcasting: 1 I0619 14:37:55.670026 6 log.go:172] (0xc000b50210) Go away received I0619 14:37:55.670214 6 log.go:172] (0xc000b50210) (0xc0017d41e0) Stream removed, broadcasting: 1 I0619 14:37:55.670247 6 log.go:172] (0xc000b50210) (0xc0018b0000) Stream removed, broadcasting: 3 I0619 14:37:55.670267 6 log.go:172] (0xc000b50210) (0xc0017d4280) Stream removed, broadcasting: 5 Jun 19 14:37:55.670: INFO: Found all expected endpoints: [netserver-0] Jun 19 14:37:55.674: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.174 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3044 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jun 19 14:37:55.674: INFO: >>> kubeConfig: /root/.kube/config I0619 14:37:55.705997 6 log.go:172] (0xc00029cf20) (0xc00352a140) Create stream I0619 14:37:55.706036 6 log.go:172] (0xc00029cf20) (0xc00352a140) Stream added, broadcasting: 1 I0619 14:37:55.711633 6 log.go:172] (0xc00029cf20) Reply frame received for 1 I0619 14:37:55.711687 6 log.go:172] (0xc00029cf20) (0xc0018b01e0) Create stream I0619 14:37:55.711703 6 log.go:172] (0xc00029cf20) (0xc0018b01e0) Stream added, broadcasting: 3 I0619 14:37:55.713058 6 log.go:172] (0xc00029cf20) Reply frame received for 3 I0619 14:37:55.713101 6 log.go:172] (0xc00029cf20) (0xc000bde1e0) Create stream I0619 14:37:55.713305 6 log.go:172] (0xc00029cf20) (0xc000bde1e0) Stream added, broadcasting: 5 I0619 14:37:55.714462 6 log.go:172] (0xc00029cf20) Reply frame received for 5 I0619 14:37:56.771959 6 log.go:172] (0xc00029cf20) Data frame received for 3 I0619 14:37:56.772094 6 log.go:172] (0xc0018b01e0) (3) Data frame handling I0619 14:37:56.772142 6 log.go:172] (0xc0018b01e0) (3) Data frame sent I0619 14:37:56.772159 6 log.go:172] (0xc00029cf20) Data frame received for 3 I0619 14:37:56.772177 6 log.go:172] (0xc0018b01e0) (3) Data frame handling I0619 14:37:56.772417 6 log.go:172] (0xc00029cf20) Data frame received for 5 I0619 14:37:56.772438 6 log.go:172] (0xc000bde1e0) (5) Data frame handling I0619 14:37:56.774719 6 log.go:172] (0xc00029cf20) Data frame received for 1 I0619 14:37:56.774754 6 log.go:172] (0xc00352a140) (1) Data frame handling I0619 14:37:56.774797 6 log.go:172] (0xc00352a140) (1) Data frame sent I0619 14:37:56.774821 6 log.go:172] (0xc00029cf20) (0xc00352a140) Stream removed, broadcasting: 1 I0619 14:37:56.774859 6 log.go:172] (0xc00029cf20) Go away received I0619 14:37:56.775107 6 log.go:172] (0xc00029cf20) (0xc00352a140) Stream removed, broadcasting: 1 I0619 14:37:56.775138 6 log.go:172] (0xc00029cf20) (0xc0018b01e0) Stream removed, broadcasting: 3 I0619 14:37:56.775164 6 log.go:172] (0xc00029cf20) (0xc000bde1e0) Stream removed, broadcasting: 5 Jun 19 14:37:56.775: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:37:56.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3044" for this suite. 
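
The probe driving the ExecWithOptions stream logs above is an ordinary UDP echo: netcat sends "hostName" to each netserver pod's UDP port 8081 and expects the responder's hostname back. Roughly the same check by hand, assuming the framework's exec is equivalent to kubectl exec (pod IP and port taken from the log):

kubectl exec -n pod-network-test-3044 host-test-container-pod -c hostexec -- \
  /bin/sh -c 'echo hostName | nc -w 1 -u 10.244.2.202 8081'
# The test additionally filters blank lines and matches the reply against the
# expected endpoint names (netserver-0, netserver-1).
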
Jun 19 14:38:18.793: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:38:18.875: INFO: namespace pod-network-test-3044 deletion completed in 22.095137931s • [SLOW TEST:52.515 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:38:18.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Jun 19 14:38:22.972: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-4d9f6ee1-0053-480a-acbf-cb7d3a23e60b,GenerateName:,Namespace:events-2260,SelfLink:/api/v1/namespaces/events-2260/pods/send-events-4d9f6ee1-0053-480a-acbf-cb7d3a23e60b,UID:3cf67cab-d49d-4d4e-94e2-d51fce3c678a,ResourceVersion:17333893,Generation:0,CreationTimestamp:2020-06-19 14:38:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 943835642,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-l8pwm {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-l8pwm,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-l8pwm true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002928fe0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc002929000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:38:19 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:38:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:38:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-06-19 14:38:18 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.203,StartTime:2020-06-19 14:38:19 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-06-19 14:38:21 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://b663b0d3b74bf7e5acffb730c43d2a42ea32d6eeeec648e7e89a991b20adb6cb}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Jun 19 14:38:24.977: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Jun 19 14:38:26.982: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:38:26.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2260" for this suite. Jun 19 14:39:05.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:39:05.111: INFO: namespace events-2260 deletion completed in 38.102049458s • [SLOW TEST:46.235 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:39:05.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jun 19 14:39:05.180: INFO: PodSpec: initContainers in spec.initContainers Jun 19 14:39:52.833: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-572026b3-ca71-4c1b-a8e3-af2921531f46", GenerateName:"", Namespace:"init-container-6180", 
SelfLink:"/api/v1/namespaces/init-container-6180/pods/pod-init-572026b3-ca71-4c1b-a8e3-af2921531f46", UID:"fadf853f-125e-451a-9f74-43b706965b2e", ResourceVersion:"17334114", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63728174345, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"180650268"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-b4tlj", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002aa4400), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b4tlj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b4tlj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), 
Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-b4tlj", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001355628), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0022998c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0013556b0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0013556d0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0013556d8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0013556dc), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174345, loc:(*time.Location)(0x7ead8c0)}}, 
Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63728174345, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.204", StartTime:(*v1.Time)(0xc00176a020), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc00176a060), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001ba2460)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://b2009111a659d54dc697af36bdc5277faddc5dc87b118549110e4c75ece8b886"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00176a080), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00176a040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:39:52.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6180" for this suite. 
Jun 19 14:40:14.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:40:14.958: INFO: namespace init-container-6180 deletion completed in 22.093044419s • [SLOW TEST:69.847 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:40:14.958: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jun 19 14:40:15.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bbbd4c66-bba4-4ea4-91f8-995dd7b3ce97" in namespace "downward-api-570" to be "success or failure" Jun 19 14:40:15.038: INFO: Pod "downwardapi-volume-bbbd4c66-bba4-4ea4-91f8-995dd7b3ce97": Phase="Pending", Reason="", readiness=false. Elapsed: 9.709473ms Jun 19 14:40:17.231: INFO: Pod "downwardapi-volume-bbbd4c66-bba4-4ea4-91f8-995dd7b3ce97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.202965815s Jun 19 14:40:19.235: INFO: Pod "downwardapi-volume-bbbd4c66-bba4-4ea4-91f8-995dd7b3ce97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.206579403s STEP: Saw pod success Jun 19 14:40:19.235: INFO: Pod "downwardapi-volume-bbbd4c66-bba4-4ea4-91f8-995dd7b3ce97" satisfied condition "success or failure" Jun 19 14:40:19.237: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bbbd4c66-bba4-4ea4-91f8-995dd7b3ce97 container client-container: STEP: delete the pod Jun 19 14:40:19.254: INFO: Waiting for pod downwardapi-volume-bbbd4c66-bba4-4ea4-91f8-995dd7b3ce97 to disappear Jun 19 14:40:19.290: INFO: Pod downwardapi-volume-bbbd4c66-bba4-4ea4-91f8-995dd7b3ce97 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:40:19.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-570" for this suite. 
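
The point of this spec: when a container declares no memory limit, a downwardAPI volume item asking for limits.memory is populated with the node's allocatable memory instead. A minimal sketch, names hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-limits
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]   # no resources.limits set
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
EOF
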
Jun 19 14:40:25.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:40:25.386: INFO: namespace downward-api-570 deletion completed in 6.092775733s • [SLOW TEST:10.428 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:40:25.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-305c9e31-8fd1-43b6-bc12-0e498dcec913 STEP: Creating a pod to test consume configMaps Jun 19 14:40:25.465: INFO: Waiting up to 5m0s for pod "pod-configmaps-ac1db8bc-ed7a-4ba8-a7af-77590c82d00d" in namespace "configmap-2246" to be "success or failure" Jun 19 14:40:25.507: INFO: Pod "pod-configmaps-ac1db8bc-ed7a-4ba8-a7af-77590c82d00d": Phase="Pending", Reason="", readiness=false. Elapsed: 42.064609ms Jun 19 14:40:27.511: INFO: Pod "pod-configmaps-ac1db8bc-ed7a-4ba8-a7af-77590c82d00d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045347007s Jun 19 14:40:29.515: INFO: Pod "pod-configmaps-ac1db8bc-ed7a-4ba8-a7af-77590c82d00d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049949409s STEP: Saw pod success Jun 19 14:40:29.515: INFO: Pod "pod-configmaps-ac1db8bc-ed7a-4ba8-a7af-77590c82d00d" satisfied condition "success or failure" Jun 19 14:40:29.518: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-ac1db8bc-ed7a-4ba8-a7af-77590c82d00d container configmap-volume-test: STEP: delete the pod Jun 19 14:40:29.548: INFO: Waiting for pod pod-configmaps-ac1db8bc-ed7a-4ba8-a7af-77590c82d00d to disappear Jun 19 14:40:29.552: INFO: Pod pod-configmaps-ac1db8bc-ed7a-4ba8-a7af-77590c82d00d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:40:29.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2246" for this suite. 
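
Here the volume maps a single ConfigMap key to a custom path and gives that one file its own mode, independent of the volume's defaultMode. A sketch with invented names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-map
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "ls -lR /etc/configmap-volume && cat /etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: demo-map
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400            # per-item mode, overriding defaultMode for this file
EOF
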
Jun 19 14:40:35.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:40:35.715: INFO: namespace configmap-2246 deletion completed in 6.15564871s • [SLOW TEST:10.328 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:40:35.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:40:39.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-830" for this suite. 
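
The read-only check above is driven entirely by the container securityContext; a minimal sketch, names illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  restartPolicy: Never
  containers:
  - name: busybox-readonly
    image: busybox:1.29
    command: ["/bin/sh", "-c", "touch /file && echo writable || echo read-only"]
    securityContext:
      readOnlyRootFilesystem: true
EOF

kubectl logs busybox-readonly   # "read-only": the write to the root filesystem is refused
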
Jun 19 14:41:25.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:41:25.956: INFO: namespace kubelet-test-830 deletion completed in 46.125784626s • [SLOW TEST:50.241 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:41:25.957: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:41:30.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1886" for this suite. 
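
A container whose command always fails lands in a terminated state whose reason the kubelet must populate; with restartPolicy Never the field can be read back directly. A sketch, names invented:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: bin-false
spec:
  restartPolicy: Never
  containers:
  - name: bin-false
    image: busybox:1.29
    command: ["/bin/false"]
EOF

kubectl get pod bin-false \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # "Error"
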
Jun 19 14:41:36.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jun 19 14:41:36.180: INFO: namespace kubelet-test-1886 deletion completed in 6.093124107s • [SLOW TEST:10.224 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jun 19 14:41:36.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Jun 19 14:41:36.226: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Jun 19 14:41:36.331: INFO: stderr: "" Jun 19 14:41:36.331: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jun 19 14:41:36.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2000" for this suite. 
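
The assertion in this spec is simply that the control-plane endpoints appear in the cluster-info output; the \x1b[... sequences in the stdout above are kubectl's terminal colouring. By hand:

kubectl cluster-info        # lists the Kubernetes master and KubeDNS URLs
kubectl cluster-info dump   # much fuller state dump, useful when the summary looks wrong
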
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:41:42.484: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jun 19 14:41:46.561: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:41:46.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4096" for this suite.
Jun 19 14:41:52.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:41:52.715: INFO: namespace container-runtime-4096 deletion completed in 6.110096875s

• [SLOW TEST:10.232 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
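A minimal sketch of the fixture shape this spec exercises (names and command are illustrative): the container writes "DONE" to its log and then fails, and with FallbackToLogsOnError the kubelet copies the tail of the log into the terminated state's message when nothing was written to terminationMessagePath.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminationMessagePod fails after logging "DONE"; the kubelet should then
// surface "DONE" as the container's termination message, matching the
// "Expected: &{DONE} to match ..." assertion in the log above.
func terminationMessagePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:                     "termination-message",
				Image:                    "busybox",
				Command:                  []string{"sh", "-c", "echo DONE; exit 1"},
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
}

func main() { fmt.Println(terminationMessagePod().Name) }
```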
[sig-apps] ReplicationController
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:41:52.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jun 19 14:41:52.756: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jun 19 14:41:53.841: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:41:55.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4871" for this suite.
Jun 19 14:42:01.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:42:01.231: INFO: namespace replication-controller-4871 deletion completed in 6.173734211s

• [SLOW TEST:8.514 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
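A sketch of the two objects the spec pits against each other (replica count, image, and labels are illustrative): a quota capping the namespace at two pods, and an RC that asks for more. The controller cannot create the extra pod, so a failure condition appears in the RC's status.conditions; scaling back under the quota clears it.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	// Quota: at most two pods in the namespace.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
		},
	}
	// RC: asks for three replicas, exceeding the quota. Pod creation for the
	// third replica is rejected, which the controller reports as a condition
	// on the RC's status.
	labels := map[string]string{"name": "condition-test"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: int32Ptr(3),
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name: "nginx", Image: "nginx", // illustrative image
				}}},
			},
		},
	}
	fmt.Println(quota.Name, rc.Name)
}
```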
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:42:01.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jun 19 14:42:01.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dee9d26e-a779-4b39-8554-d9770a3b314b" in namespace "projected-7838" to be "success or failure"
Jun 19 14:42:01.418: INFO: Pod "downwardapi-volume-dee9d26e-a779-4b39-8554-d9770a3b314b": Phase="Pending", Reason="", readiness=false. Elapsed: 102.969986ms
Jun 19 14:42:03.423: INFO: Pod "downwardapi-volume-dee9d26e-a779-4b39-8554-d9770a3b314b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107625244s
Jun 19 14:42:05.556: INFO: Pod "downwardapi-volume-dee9d26e-a779-4b39-8554-d9770a3b314b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.240178923s
STEP: Saw pod success
Jun 19 14:42:05.556: INFO: Pod "downwardapi-volume-dee9d26e-a779-4b39-8554-d9770a3b314b" satisfied condition "success or failure"
Jun 19 14:42:05.559: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-dee9d26e-a779-4b39-8554-d9770a3b314b container client-container:
STEP: delete the pod
Jun 19 14:42:05.633: INFO: Waiting for pod downwardapi-volume-dee9d26e-a779-4b39-8554-d9770a3b314b to disappear
Jun 19 14:42:05.675: INFO: Pod downwardapi-volume-dee9d26e-a779-4b39-8554-d9770a3b314b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:42:05.675: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7838" for this suite.
Jun 19 14:42:11.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:42:11.763: INFO: namespace projected-7838 deletion completed in 6.084492423s

• [SLOW TEST:10.532 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
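A sketch of the volume definition at the heart of this spec (the mode, file path, and volume name are illustrative): a projected downward API volume whose DefaultMode should be reflected on every projected file inside the container.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// DefaultMode applies to each projected file; the test then reads the
	// file's permissions from inside the pod and asserts they match.
	defaultMode := int32(0400) // illustrative; e.g. r-------- on the file
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &defaultMode,
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				}},
			},
		},
	}
	fmt.Println(vol.Name)
}
```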
[sig-api-machinery] Watchers
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:42:11.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jun 19 14:42:11.842: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-a,UID:f8754d27-4e65-474b-8af5-afc7d8f16ca6,ResourceVersion:17334619,Generation:0,CreationTimestamp:2020-06-19 14:42:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 19 14:42:11.842: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-a,UID:f8754d27-4e65-474b-8af5-afc7d8f16ca6,ResourceVersion:17334619,Generation:0,CreationTimestamp:2020-06-19 14:42:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jun 19 14:42:21.850: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-a,UID:f8754d27-4e65-474b-8af5-afc7d8f16ca6,ResourceVersion:17334639,Generation:0,CreationTimestamp:2020-06-19 14:42:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jun 19 14:42:21.850: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-a,UID:f8754d27-4e65-474b-8af5-afc7d8f16ca6,ResourceVersion:17334639,Generation:0,CreationTimestamp:2020-06-19 14:42:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jun 19 14:42:31.859: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-a,UID:f8754d27-4e65-474b-8af5-afc7d8f16ca6,ResourceVersion:17334660,Generation:0,CreationTimestamp:2020-06-19 14:42:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 19 14:42:31.859: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-a,UID:f8754d27-4e65-474b-8af5-afc7d8f16ca6,ResourceVersion:17334660,Generation:0,CreationTimestamp:2020-06-19 14:42:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jun 19 14:42:41.865: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-a,UID:f8754d27-4e65-474b-8af5-afc7d8f16ca6,ResourceVersion:17334680,Generation:0,CreationTimestamp:2020-06-19 14:42:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jun 19 14:42:41.865: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-a,UID:f8754d27-4e65-474b-8af5-afc7d8f16ca6,ResourceVersion:17334680,Generation:0,CreationTimestamp:2020-06-19 14:42:11 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jun 19 14:42:51.873: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-b,UID:d24b635d-e41e-44ed-be54-96ebbe0d52f5,ResourceVersion:17334700,Generation:0,CreationTimestamp:2020-06-19 14:42:51 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 19 14:42:51.873: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-b,UID:d24b635d-e41e-44ed-be54-96ebbe0d52f5,ResourceVersion:17334700,Generation:0,CreationTimestamp:2020-06-19 14:42:51 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jun 19 14:43:01.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-b,UID:d24b635d-e41e-44ed-be54-96ebbe0d52f5,ResourceVersion:17334721,Generation:0,CreationTimestamp:2020-06-19 14:42:51 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jun 19 14:43:01.880: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-7029,SelfLink:/api/v1/namespaces/watch-7029/configmaps/e2e-watch-test-configmap-b,UID:d24b635d-e41e-44ed-be54-96ebbe0d52f5,ResourceVersion:17334721,Generation:0,CreationTimestamp:2020-06-19 14:42:51 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:43:11.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7029" for this suite.
Jun 19 14:43:17.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:43:18.035: INFO: namespace watch-7029 deletion completed in 6.136096714s

• [SLOW TEST:66.271 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
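The ADDED/MODIFIED/DELETED notifications above come from label-selected watches. A sketch of one such watch follows, using a recent client-go (which threads a context through Watch; the v1.15-era signature took only ListOptions). The selector string mirrors the test's labels, e.g. "watch-this-configmap=multiple-watchers-A"; wiring a clientset from a kubeconfig is omitted.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabelled follows configmaps matching the selector and logs each event,
// mirroring the "Got : ADDED/MODIFIED/DELETED ..." lines in the run above.
func watchLabelled(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		cm, ok := ev.Object.(*corev1.ConfigMap)
		if !ok {
			continue // e.g. a bookmark or error status object
		}
		fmt.Printf("Got : %s %s (mutation=%s)\n", ev.Type, cm.Name, cm.Data["mutation"])
	}
	return nil
}

func main() {} // clientset construction from kubeconfig omitted for brevity
```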
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:43:18.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jun 19 14:43:18.090: INFO: Waiting up to 5m0s for pod "pod-fbf50f98-8671-4612-8f97-e49de1285f00" in namespace "emptydir-4210" to be "success or failure"
Jun 19 14:43:18.125: INFO: Pod "pod-fbf50f98-8671-4612-8f97-e49de1285f00": Phase="Pending", Reason="", readiness=false. Elapsed: 35.395131ms
Jun 19 14:43:20.129: INFO: Pod "pod-fbf50f98-8671-4612-8f97-e49de1285f00": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039419207s
Jun 19 14:43:22.134: INFO: Pod "pod-fbf50f98-8671-4612-8f97-e49de1285f00": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043843482s
STEP: Saw pod success
Jun 19 14:43:22.134: INFO: Pod "pod-fbf50f98-8671-4612-8f97-e49de1285f00" satisfied condition "success or failure"
Jun 19 14:43:22.137: INFO: Trying to get logs from node iruya-worker2 pod pod-fbf50f98-8671-4612-8f97-e49de1285f00 container test-container:
STEP: delete the pod
Jun 19 14:43:22.168: INFO: Waiting for pod pod-fbf50f98-8671-4612-8f97-e49de1285f00 to disappear
Jun 19 14:43:22.173: INFO: Pod pod-fbf50f98-8671-4612-8f97-e49de1285f00 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:43:22.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4210" for this suite.
Jun 19 14:43:28.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:43:28.270: INFO: namespace emptydir-4210 deletion completed in 6.09487114s

• [SLOW TEST:10.235 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
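A minimal sketch of the pod shape behind "emptydir 0666 on node default medium"; the e2e uses a purpose-built mounttest image, so the busybox command here is only an illustrative stand-in that writes a 0666 file and prints its mode back.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod mounts an emptyDir on the default medium (node disk) and has
// the container create a file with the requested 0666 permissions, then echo
// the mode so the test can assert on the pod's log output.
func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-mode"}, // illustrative
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name:         "test-volume",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirPod().Name) }
```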
[sig-storage] ConfigMap
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:43:28.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-57c73c33-4056-485c-b7e6-f66180f7b6d9
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:43:34.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4745" for this suite.
Jun 19 14:43:56.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:43:56.519: INFO: namespace configmap-4745 deletion completed in 22.11244134s

• [SLOW TEST:28.249 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
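The point of this spec is that a ConfigMap carries both data (UTF-8 strings) and binaryData (arbitrary bytes), and both kinds of keys surface as files when the ConfigMap is mounted as a volume. A sketch, with illustrative key names and payloads:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// When mounted via a configMap volume source, "data-1" and "dump.bin"
	// both appear as files under the mount path; binaryData round-trips
	// byte sequences that string data cannot carry.
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd"}, // illustrative
		Data:       map[string]string{"data-1": "value-1"},
		BinaryData: map[string][]byte{"dump.bin": {0xde, 0xca, 0xfe, 0x00}},
	}
	fmt.Println(cm.Name)
}
```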
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:43:56.519: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:44:27.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6686" for this suite.
Jun 19 14:44:33.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:44:33.323: INFO: namespace container-runtime-6686 deletion completed in 6.112719163s

• [SLOW TEST:36.804 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
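The three fixture names in the log plausibly encode the three restart policies (rpa, rpof, rpn), which is why each gets its own expected RestartCount, Phase, and State. A sketch of the mapping; the per-fixture comments describe standard restart-policy semantics rather than the exact e2e assertions.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// RestartPolicyAlways:    the kubelet restarts the container on any exit,
	//                         so RestartCount climbs and the pod stays Running.
	// RestartPolicyOnFailure: restarts happen only on a non-zero exit; once the
	//                         command exits 0 the pod can reach Succeeded.
	// RestartPolicyNever:     a single run decides the terminal phase,
	//                         Succeeded or Failed.
	for name, policy := range map[string]corev1.RestartPolicy{
		"terminate-cmd-rpa":  corev1.RestartPolicyAlways,
		"terminate-cmd-rpof": corev1.RestartPolicyOnFailure,
		"terminate-cmd-rpn":  corev1.RestartPolicyNever,
	} {
		fmt.Println(name, "->", policy)
	}
}
```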
[sig-storage] HostPath
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:44:33.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jun 19 14:44:33.429: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1663" to be "success or failure"
Jun 19 14:44:33.433: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.006789ms
Jun 19 14:44:35.456: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026687773s
Jun 19 14:44:37.460: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.030225068s
Jun 19 14:44:39.464: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.035053428s
STEP: Saw pod success
Jun 19 14:44:39.464: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jun 19 14:44:39.468: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Jun 19 14:44:39.504: INFO: Waiting for pod pod-host-path-test to disappear
Jun 19 14:44:39.514: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:44:39.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1663" for this suite.
Jun 19 14:44:45.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:44:45.658: INFO: namespace hostpath-1663 deletion completed in 6.140699159s

• [SLOW TEST:12.335 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
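A sketch of the hostPath volume under test; the path and type below are illustrative, not the exact e2e fixture. A hostPath volume exposes a directory from the node's filesystem, and the spec asserts the mode the kubelet gives the mount point inside the container.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	hostPathType := corev1.HostPathDirectoryOrCreate
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			HostPath: &corev1.HostPathVolumeSource{
				Path: "/tmp/test-volume", // directory on the node, illustrative
				Type: &hostPathType,      // create the directory if it is missing
			},
		},
	}
	fmt.Println(vol.Name)
}
```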
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jun 19 14:44:45.659: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jun 19 14:44:45.696: INFO: Waiting up to 5m0s for pod "pod-14c21228-a9ae-4dd4-a859-fa9688921f14" in namespace "emptydir-8286" to be "success or failure"
Jun 19 14:44:45.714: INFO: Pod "pod-14c21228-a9ae-4dd4-a859-fa9688921f14": Phase="Pending", Reason="", readiness=false. Elapsed: 17.809584ms
Jun 19 14:44:47.717: INFO: Pod "pod-14c21228-a9ae-4dd4-a859-fa9688921f14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021027395s
Jun 19 14:44:49.720: INFO: Pod "pod-14c21228-a9ae-4dd4-a859-fa9688921f14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024095137s
STEP: Saw pod success
Jun 19 14:44:49.721: INFO: Pod "pod-14c21228-a9ae-4dd4-a859-fa9688921f14" satisfied condition "success or failure"
Jun 19 14:44:49.723: INFO: Trying to get logs from node iruya-worker2 pod pod-14c21228-a9ae-4dd4-a859-fa9688921f14 container test-container:
STEP: delete the pod
Jun 19 14:44:49.752: INFO: Waiting for pod pod-14c21228-a9ae-4dd4-a859-fa9688921f14 to disappear
Jun 19 14:44:49.766: INFO: Pod pod-14c21228-a9ae-4dd4-a859-fa9688921f14 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jun 19 14:44:49.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8286" for this suite.
Jun 19 14:44:55.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jun 19 14:44:55.864: INFO: namespace emptydir-8286 deletion completed in 6.094263352s

• [SLOW TEST:10.205 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
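The one-line difference between this spec and the default-medium emptyDir specs above is the storage medium. Setting it to Memory backs the volume with tmpfs instead of node disk; a sketch:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// medium: Memory requests a tmpfs-backed emptyDir; the test mounts it
	// and checks the mount point's mode and filesystem type from inside
	// the container.
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	fmt.Println(vol.Name)
}
```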
SSSSSSSSSSSSSSSSSSSSSSSS
Jun 19 14:44:55.865: INFO: Running AfterSuite actions on all nodes
Jun 19 14:44:55.865: INFO: Running AfterSuite actions on node 1
Jun 19 14:44:55.865: INFO: Skipping dumping logs from cluster

Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:789

Ran 215 of 4412 Specs in 6543.624 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (6543.84s)
FAIL