I1122 22:17:21.934863 6 e2e.go:243] Starting e2e run "b397be2a-f775-49db-b336-3373bba13ad6" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1606083440 - Will randomize all specs
Will run 215 of 4413 specs
Nov 22 22:17:22.135: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:17:22.139: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Nov 22 22:17:22.155: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Nov 22 22:17:22.179: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Nov 22 22:17:22.179: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Nov 22 22:17:22.179: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Nov 22 22:17:22.191: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Nov 22 22:17:22.191: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Nov 22 22:17:22.191: INFO: e2e test version: v1.15.12
Nov 22 22:17:22.193: INFO: kube-apiserver version: v1.15.11
SSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:17:22.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Nov 22 22:17:22.423: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-5b987c19-51db-4f20-8c6f-a5114ed180a1
STEP: Creating a pod to test consume secrets
Nov 22 22:17:22.461: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb" in namespace "projected-1949" to be "success or failure"
Nov 22 22:17:22.490: INFO: Pod "pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb": Phase="Pending", Reason="", readiness=false. Elapsed: 29.745429ms
Nov 22 22:17:24.516: INFO: Pod "pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055571854s
Nov 22 22:17:26.541: INFO: Pod "pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb": Phase="Running", Reason="", readiness=true. Elapsed: 4.080835679s
Nov 22 22:17:28.546: INFO: Pod "pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.085231094s
STEP: Saw pod success
Nov 22 22:17:28.546: INFO: Pod "pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb" satisfied condition "success or failure"
Nov 22 22:17:28.551: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb container projected-secret-volume-test:
STEP: delete the pod
Nov 22 22:17:28.587: INFO: Waiting for pod pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb to disappear
Nov 22 22:17:28.598: INFO: Pod pod-projected-secrets-e2f12ad3-9b61-45ca-9199-535ad94445fb no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:17:28.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1949" for this suite.
Nov 22 22:17:34.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:17:34.684: INFO: namespace projected-1949 deletion completed in 6.08263724s
• [SLOW TEST:12.492 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:17:34.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-162f7a74-9e17-423f-8c49-f5b7b2d5ae8a
STEP: Creating secret with name s-test-opt-upd-d765ed44-083c-41e4-964f-27420cca6e07
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-162f7a74-9e17-423f-8c49-f5b7b2d5ae8a
STEP: Updating secret s-test-opt-upd-d765ed44-083c-41e4-964f-27420cca6e07
STEP: Creating secret with name s-test-opt-create-e8f4d77c-db01-4b42-b1ca-bccd4d1cfa39
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:18:55.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4378" for this suite.
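The optional-secret test above creates, deletes, and updates secrets while a pod has them mounted. A minimal sketch of the kind of pod spec it exercises (names and image are illustrative, not the generated ones in the log):

```yaml
# Illustrative pod mounting a secret volume marked optional.
# The e2e test deletes/updates/creates secrets like this one and
# watches the mounted files for the change to propagate.
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo        # hypothetical name
spec:
  containers:
  - name: secret-watcher
    image: busybox                # illustrative image
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: opt-secret
      mountPath: /etc/opt-secret
  volumes:
  - name: opt-secret
    secret:
      secretName: s-test-opt-del  # may not exist yet, or may be deleted later
      optional: true              # pod still starts even if the secret is absent
```

With `optional: true`, the kubelet tolerates a missing secret at mount time and reflects later creations, updates, and deletions into the volume, which is exactly the behavior the "waiting to observe update in volume" step verifies.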
Nov 22 22:19:17.683: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:19:17.753: INFO: namespace secrets-4378 deletion completed in 22.085331228s
• [SLOW TEST:103.069 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:19:17.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 22:19:17.872: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a05ce49-5e39-45a5-b210-31240d9dc61e" in namespace "projected-4" to be "success or failure"
Nov 22 22:19:17.881: INFO: Pod "downwardapi-volume-8a05ce49-5e39-45a5-b210-31240d9dc61e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.589934ms
Nov 22 22:19:19.886: INFO: Pod "downwardapi-volume-8a05ce49-5e39-45a5-b210-31240d9dc61e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014111892s
Nov 22 22:19:21.890: INFO: Pod "downwardapi-volume-8a05ce49-5e39-45a5-b210-31240d9dc61e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018178153s
STEP: Saw pod success
Nov 22 22:19:21.890: INFO: Pod "downwardapi-volume-8a05ce49-5e39-45a5-b210-31240d9dc61e" satisfied condition "success or failure"
Nov 22 22:19:21.893: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8a05ce49-5e39-45a5-b210-31240d9dc61e container client-container:
STEP: delete the pod
Nov 22 22:19:21.928: INFO: Waiting for pod downwardapi-volume-8a05ce49-5e39-45a5-b210-31240d9dc61e to disappear
Nov 22 22:19:21.944: INFO: Pod downwardapi-volume-8a05ce49-5e39-45a5-b210-31240d9dc61e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:19:21.944: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4" for this suite.
Nov 22 22:19:27.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:19:28.035: INFO: namespace projected-4 deletion completed in 6.087256722s
• [SLOW TEST:10.281 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:19:28.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 22:19:28.090: INFO: Waiting up to 5m0s for pod "downwardapi-volume-323c9ed7-996c-4ae1-a1d7-02278542c1e6" in namespace "projected-3706" to be "success or failure"
Nov 22 22:19:28.128: INFO: Pod "downwardapi-volume-323c9ed7-996c-4ae1-a1d7-02278542c1e6": Phase="Pending", Reason="", readiness=false. Elapsed: 38.011678ms
Nov 22 22:19:30.133: INFO: Pod "downwardapi-volume-323c9ed7-996c-4ae1-a1d7-02278542c1e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042444878s
Nov 22 22:19:32.137: INFO: Pod "downwardapi-volume-323c9ed7-996c-4ae1-a1d7-02278542c1e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047119161s
STEP: Saw pod success
Nov 22 22:19:32.137: INFO: Pod "downwardapi-volume-323c9ed7-996c-4ae1-a1d7-02278542c1e6" satisfied condition "success or failure"
Nov 22 22:19:32.141: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-323c9ed7-996c-4ae1-a1d7-02278542c1e6 container client-container:
STEP: delete the pod
Nov 22 22:19:32.161: INFO: Waiting for pod downwardapi-volume-323c9ed7-996c-4ae1-a1d7-02278542c1e6 to disappear
Nov 22 22:19:32.206: INFO: Pod downwardapi-volume-323c9ed7-996c-4ae1-a1d7-02278542c1e6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:19:32.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3706" for this suite.
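The two downward API tests above mount container resource fields as files in a volume; when a CPU limit is not set, the reported value falls back to the node's allocatable CPU. A minimal sketch of the kind of spec involved (names and image are illustrative):

```yaml
# Illustrative downward API volume exposing a resource field as a file.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox                # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      requests:
        cpu: 250m
        memory: 32Mi
      # no CPU limit set: limits.cpu below then resolves to node allocatable CPU
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m             # report the value in millicores
```

The memory-request variant is the same shape with `resource: requests.memory`; the tests read the mounted file from the container's logs and compare it against the expected value.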
Nov 22 22:19:40.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:19:40.301: INFO: namespace projected-3706 deletion completed in 8.091034189s
• [SLOW TEST:12.266 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:19:40.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-qbkj
STEP: Creating a pod to test atomic-volume-subpath
Nov 22 22:19:40.514: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-qbkj" in namespace "subpath-2180" to be "success or failure"
Nov 22 22:19:40.538: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Pending", Reason="", readiness=false. Elapsed: 23.091657ms
Nov 22 22:19:42.686: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1711224s
Nov 22 22:19:44.696: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 4.181735421s
Nov 22 22:19:46.817: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 6.302572177s
Nov 22 22:19:48.821: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 8.306492833s
Nov 22 22:19:50.841: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 10.326776975s
Nov 22 22:19:52.846: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 12.331318228s
Nov 22 22:19:54.850: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 14.335371599s
Nov 22 22:19:56.854: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 16.339303112s
Nov 22 22:19:58.857: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 18.342983234s
Nov 22 22:20:00.862: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 20.347955459s
Nov 22 22:20:02.866: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Running", Reason="", readiness=true. Elapsed: 22.35179836s
Nov 22 22:20:04.869: INFO: Pod "pod-subpath-test-projected-qbkj": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.354711006s
STEP: Saw pod success
Nov 22 22:20:04.869: INFO: Pod "pod-subpath-test-projected-qbkj" satisfied condition "success or failure"
Nov 22 22:20:04.871: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-qbkj container test-container-subpath-projected-qbkj:
STEP: delete the pod
Nov 22 22:20:04.886: INFO: Waiting for pod pod-subpath-test-projected-qbkj to disappear
Nov 22 22:20:04.890: INFO: Pod pod-subpath-test-projected-qbkj no longer exists
STEP: Deleting pod pod-subpath-test-projected-qbkj
Nov 22 22:20:04.890: INFO: Deleting pod "pod-subpath-test-projected-qbkj" in namespace "subpath-2180"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:20:04.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2180" for this suite.
Nov 22 22:20:11.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:20:11.200: INFO: namespace subpath-2180 deletion completed in 6.255795483s
• [SLOW TEST:30.899 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:20:11.200: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Nov 22 22:20:19.366: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:19.397: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:21.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:21.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:23.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:23.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:25.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:25.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:27.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:27.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:29.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:29.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:31.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:31.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:33.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:33.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:35.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:35.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:37.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:37.410: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:39.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:39.402: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:41.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:41.404: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:43.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:43.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:45.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:45.401: INFO: Pod pod-with-poststart-exec-hook still exists
Nov 22 22:20:47.397: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Nov 22 22:20:47.400: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:20:47.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9193" for this suite.
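The lifecycle-hook test above creates a pod named `pod-with-poststart-exec-hook` (the name appears in the log). A minimal sketch of the spec shape such a pod would use, with illustrative image and hook command since the log does not show them:

```yaml
# Illustrative pod with a postStart exec hook; name taken from the log,
# image and commands are assumptions for the sketch.
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-exec-hook
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          # runs inside the container right after it is created;
          # the pod does not reach Running until the hook returns
          command: ["sh", "-c", "echo poststart > /tmp/hook"]
```

The "check poststart hook" step then verifies the hook's side effect before deleting the pod, which is why the log polls "Waiting for pod ... to disappear" until termination completes.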
Nov 22 22:21:09.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:21:09.504: INFO: namespace container-lifecycle-hook-9193 deletion completed in 22.099891288s
• [SLOW TEST:58.304 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Job
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:21:09.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-391, will wait for the garbage collector to delete the pods
Nov 22 22:21:15.638: INFO: Deleting Job.batch foo took: 5.49782ms
Nov 22 22:21:15.938: INFO: Terminating Job.batch foo pods took: 300.29461ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:21:55.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-391" for this suite.
Nov 22 22:22:01.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:22:01.725: INFO: namespace job-391 deletion completed in 6.081420473s
• [SLOW TEST:52.221 seconds]
[sig-apps] Job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:22:01.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Nov 22 22:22:01.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-840'
Nov 22 22:22:04.606: INFO: stderr: ""
Nov 22 22:22:04.606: INFO: stdout: "pod/pause created\n"
Nov 22 22:22:04.606: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Nov 22 22:22:04.606: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-840" to be "running and ready"
Nov 22 22:22:04.609: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.245028ms
Nov 22 22:22:06.614: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007785073s
Nov 22 22:22:08.618: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.011759013s
Nov 22 22:22:08.618: INFO: Pod "pause" satisfied condition "running and ready"
Nov 22 22:22:08.618: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Nov 22 22:22:08.618: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-840'
Nov 22 22:22:08.712: INFO: stderr: ""
Nov 22 22:22:08.712: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Nov 22 22:22:08.712: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-840'
Nov 22 22:22:08.809: INFO: stderr: ""
Nov 22 22:22:08.809: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Nov 22 22:22:08.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-840'
Nov 22 22:22:08.916: INFO: stderr: ""
Nov 22 22:22:08.916: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Nov 22 22:22:08.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-840'
Nov 22 22:22:08.999: INFO: stderr: ""
Nov 22 22:22:08.999: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Nov 22 22:22:08.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-840'
Nov 22 22:22:09.118: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 22 22:22:09.118: INFO: stdout: "pod \"pause\" force deleted\n"
Nov 22 22:22:09.118: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-840'
Nov 22 22:22:09.211: INFO: stderr: "No resources found.\n"
Nov 22 22:22:09.211: INFO: stdout: ""
Nov 22 22:22:09.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-840 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 22 22:22:09.294: INFO: stderr: ""
Nov 22 22:22:09.294: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:22:09.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-840" for this suite.
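The label test above drives the `pause` pod imperatively with `kubectl label pods pause testing-label=testing-label-value` to add the label and `kubectl label pods pause testing-label-` (trailing dash) to remove it, both commands taken from the log. The same end state can be written declaratively; a sketch with an assumed image, since the log does not show the pod spec:

```yaml
# Declarative equivalent of the labeled "pause" pod from the test.
apiVersion: v1
kind: Pod
metadata:
  name: pause
  labels:
    testing-label: testing-label-value   # the label the test adds, then removes
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause   # illustrative; the actual test image is not in the log
```

`kubectl get pod pause -L testing-label` then shows the label value in a `TESTING-LABEL` column, which is exactly how the test verifies both the add and the remove.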
Nov 22 22:22:15.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:22:15.560: INFO: namespace kubectl-840 deletion completed in 6.262916008s
• [SLOW TEST:13.834 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class
  should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:22:15.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:22:15.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-756" for this suite.
Nov 22 22:22:37.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:22:37.943: INFO: namespace pods-756 deletion completed in 22.096906889s
• [SLOW TEST:22.382 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
  should do a rolling update of a replication controller [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:22:37.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Nov 22 22:22:38.031: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6911'
Nov 22 22:22:38.360: INFO: stderr: ""
Nov 22 22:22:38.360: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 22 22:22:38.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6911'
Nov 22 22:22:38.487: INFO: stderr: ""
Nov 22 22:22:38.487: INFO: stdout: "update-demo-nautilus-b527k update-demo-nautilus-tckw4 "
Nov 22 22:22:38.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b527k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:22:38.571: INFO: stderr: "" Nov 22 22:22:38.571: INFO: stdout: "" Nov 22 22:22:38.571: INFO: update-demo-nautilus-b527k is created but not running Nov 22 22:22:43.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6911' Nov 22 22:22:43.676: INFO: stderr: "" Nov 22 22:22:43.676: INFO: stdout: "update-demo-nautilus-b527k update-demo-nautilus-tckw4 " Nov 22 22:22:43.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b527k -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:22:43.769: INFO: stderr: "" Nov 22 22:22:43.769: INFO: stdout: "true" Nov 22 22:22:43.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-b527k -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:22:43.849: INFO: stderr: "" Nov 22 22:22:43.849: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 22 22:22:43.850: INFO: validating pod update-demo-nautilus-b527k Nov 22 22:22:43.855: INFO: got data: { "image": "nautilus.jpg" } Nov 22 22:22:43.855: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 22 22:22:43.855: INFO: update-demo-nautilus-b527k is verified up and running Nov 22 22:22:43.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tckw4 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:22:43.965: INFO: stderr: "" Nov 22 22:22:43.965: INFO: stdout: "true" Nov 22 22:22:43.965: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tckw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:22:44.069: INFO: stderr: "" Nov 22 22:22:44.069: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 22 22:22:44.069: INFO: validating pod update-demo-nautilus-tckw4 Nov 22 22:22:44.077: INFO: got data: { "image": "nautilus.jpg" } Nov 22 22:22:44.077: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 22 22:22:44.078: INFO: update-demo-nautilus-tckw4 is verified up and running STEP: rolling-update to new replication controller Nov 22 22:22:44.079: INFO: scanned /root for discovery docs: Nov 22 22:22:44.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6911' Nov 22 22:23:08.310: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Nov 22 22:23:08.310: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Nov 22 22:23:08.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6911' Nov 22 22:23:08.438: INFO: stderr: "" Nov 22 22:23:08.438: INFO: stdout: "update-demo-kitten-q7wqr update-demo-kitten-xv5f4 " Nov 22 22:23:08.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q7wqr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:23:08.588: INFO: stderr: "" Nov 22 22:23:08.588: INFO: stdout: "true" Nov 22 22:23:08.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-q7wqr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:23:08.682: INFO: stderr: "" Nov 22 22:23:08.682: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Nov 22 22:23:08.682: INFO: validating pod update-demo-kitten-q7wqr Nov 22 22:23:08.707: INFO: got data: { "image": "kitten.jpg" } Nov 22 22:23:08.707: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Nov 22 22:23:08.707: INFO: update-demo-kitten-q7wqr is verified up and running Nov 22 22:23:08.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xv5f4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:23:08.809: INFO: stderr: "" Nov 22 22:23:08.809: INFO: stdout: "true" Nov 22 22:23:08.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-xv5f4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6911' Nov 22 22:23:08.921: INFO: stderr: "" Nov 22 22:23:08.921: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Nov 22 22:23:08.921: INFO: validating pod update-demo-kitten-xv5f4 Nov 22 22:23:08.936: INFO: got data: { "image": "kitten.jpg" } Nov 22 22:23:08.936: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Nov 22 22:23:08.937: INFO: update-demo-kitten-xv5f4 is verified up and running [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:23:08.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6911" for this suite. 
Nov 22 22:23:32.974: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:23:33.041: INFO: namespace kubectl-6911 deletion completed in 24.100180019s • [SLOW TEST:55.097 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:23:33.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-5a64352b-c36d-4f81-9343-fa4839774126 [AfterEach] [sig-node] ConfigMap 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:23:33.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7273" for this suite. Nov 22 22:23:39.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:23:39.223: INFO: namespace configmap-7273 deletion completed in 6.116478629s • [SLOW TEST:6.182 seconds] [sig-node] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:23:39.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 22:23:39.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9840' Nov 22 22:23:39.927: INFO: stderr: "" Nov 22 22:23:39.927: INFO: stdout: "replicationcontroller/redis-master created\n" Nov 22 22:23:39.927: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9840' Nov 22 22:23:40.560: INFO: stderr: "" Nov 22 22:23:40.560: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Nov 22 22:23:41.564: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:23:41.564: INFO: Found 0 / 1 Nov 22 22:23:43.514: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:23:43.514: INFO: Found 0 / 1 Nov 22 22:23:43.565: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:23:43.565: INFO: Found 0 / 1 Nov 22 22:23:44.564: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:23:44.564: INFO: Found 0 / 1 Nov 22 22:23:45.565: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:23:45.565: INFO: Found 0 / 1 Nov 22 22:23:46.564: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:23:46.564: INFO: Found 1 / 1 Nov 22 22:23:46.564: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 22 22:23:46.567: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:23:46.567: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
Nov 22 22:23:46.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gd5qn --namespace=kubectl-9840' Nov 22 22:23:46.678: INFO: stderr: "" Nov 22 22:23:46.678: INFO: stdout: "Name: redis-master-gd5qn\nNamespace: kubectl-9840\nPriority: 0\nNode: iruya-worker/172.18.0.6\nStart Time: Sun, 22 Nov 2020 22:23:40 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.154\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://0205cdb85dcdeeedbc5c3831f171216d1b6a0a6fd007692c93286cbb664d724a\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 22 Nov 2020 22:23:44 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-snx25 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-snx25:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-snx25\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 6s default-scheduler Successfully assigned kubectl-9840/redis-master-gd5qn to iruya-worker\n Normal Pulled 5s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 2s kubelet, iruya-worker Started container redis-master\n" Nov 22 22:23:46.678: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc 
redis-master --namespace=kubectl-9840' Nov 22 22:23:46.784: INFO: stderr: "" Nov 22 22:23:46.784: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9840\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: redis-master-gd5qn\n" Nov 22 22:23:46.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9840' Nov 22 22:23:46.888: INFO: stderr: "" Nov 22 22:23:46.889: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9840\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.100.147.28\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.154:6379\nSession Affinity: None\nEvents: \n" Nov 22 22:23:46.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Nov 22 22:23:47.011: INFO: stderr: "" Nov 22 22:23:47.011: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Wed, 23 Sep 2020 08:25:31 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime 
LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 22 Nov 2020 22:23:29 +0000 Wed, 23 Sep 2020 08:25:31 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 22 Nov 2020 22:23:29 +0000 Wed, 23 Sep 2020 08:25:31 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 22 Nov 2020 22:23:29 +0000 Wed, 23 Sep 2020 08:25:31 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 22 Nov 2020 22:23:29 +0000 Wed, 23 Sep 2020 08:26:01 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.18.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759868Ki\n pods: 110\nSystem Info:\n Machine ID: 75bedc8ea3a84920a6257d408ae4fc72\n System UUID: f7c1d795-23db-4f0f-aa92-a051f5bbc85d\n Boot ID: b267d78b-f69b-4338-80e8-3f4944338e5d\n Kernel Version: 4.15.0-118-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.3-14-g449e9269\n Kubelet Version: v1.15.11\n Kube-Proxy Version: v1.15.11\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-5d4dd4b4db-ktm6r 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 60d\n kube-system coredns-5d4dd4b4db-m9gbg 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 60d\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kindnet-rv6n5 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 60d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system 
kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kube-proxy-zcw5n 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 60d\n local-path-storage local-path-provisioner-668779bd7-t77bq 0 (0%) 0 (0%) 0 (0%) 0 (0%) 60d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Nov 22 22:23:47.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9840' Nov 22 22:23:47.113: INFO: stderr: "" Nov 22 22:23:47.113: INFO: stdout: "Name: kubectl-9840\nLabels: e2e-framework=kubectl\n e2e-run=b397be2a-f775-49db-b336-3373bba13ad6\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:23:47.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9840" for this suite. 
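The `kubectl describe rc redis-master` output above can be read back into the manifest that produced it. A sketch consistent with every field the describe output reports (labels, selector, image, port) — again a reconstruction, not the suite's actual testdata:

```yaml
# Reconstructed from the `describe rc` output in the log: labels and selector
# app=redis,role=master; image redis:1.0; container port 6379/TCP; 1 replica.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379
```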
Nov 22 22:24:11.132: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:24:11.268: INFO: namespace kubectl-9840 deletion completed in 24.150671073s • [SLOW TEST:32.044 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:24:11.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-fbf8d4b4-14c6-4c17-8a84-aca7b5ef96ce STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:24:17.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1693" for this suite. Nov 22 22:24:39.443: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:24:39.524: INFO: namespace configmap-1693 deletion completed in 22.095306088s • [SLOW TEST:28.256 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:24:39.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on 
node default medium Nov 22 22:24:39.599: INFO: Waiting up to 5m0s for pod "pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2" in namespace "emptydir-3003" to be "success or failure" Nov 22 22:24:39.608: INFO: Pod "pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 9.452287ms Nov 22 22:24:41.612: INFO: Pod "pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012883067s Nov 22 22:24:43.617: INFO: Pod "pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018403662s Nov 22 22:24:45.620: INFO: Pod "pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.021505542s STEP: Saw pod success Nov 22 22:24:45.621: INFO: Pod "pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2" satisfied condition "success or failure" Nov 22 22:24:45.623: INFO: Trying to get logs from node iruya-worker pod pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2 container test-container: STEP: delete the pod Nov 22 22:24:45.673: INFO: Waiting for pod pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2 to disappear Nov 22 22:24:45.685: INFO: Pod pod-24f0c07f-c8fb-465c-8e9d-89046b90e6a2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:24:45.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3003" for this suite. 
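The emptyDir test pod itself is not shown in the log, only its phase transitions. The e2e suite uses its own mount-test image, so the following is purely an illustration of the same scenario the test name describes — a non-root user creating a 0666 file on a default-medium `emptyDir` — using busybox rather than the suite's image:

```yaml
# Illustrative only: not the suite's pod. Exercises the same idea as
# "(non-root,0666,default)": run as non-root, write a 0666-mode file
# into a default-medium emptyDir, then exit so the pod reaches Succeeded.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}
```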
Nov 22 22:24:51.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:24:51.944: INFO: namespace emptydir-3003 deletion completed in 6.255028204s
• [SLOW TEST:12.419 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:24:51.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Nov 22 22:25:02.123: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.123: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:02.268: INFO: Exec stderr: ""
Nov 22 22:25:02.268: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.268: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:02.364: INFO: Exec stderr: ""
Nov 22 22:25:02.364: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.364: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:02.458: INFO: Exec stderr: ""
Nov 22 22:25:02.458: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.458: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:02.549: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Nov 22 22:25:02.549: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.549: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:02.658: INFO: Exec stderr: ""
Nov 22 22:25:02.658: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.658: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:02.753: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Nov 22 22:25:02.753: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.753: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:02.839: INFO: Exec stderr: ""
Nov 22 22:25:02.839: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.839: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:02.948: INFO: Exec stderr: ""
Nov 22 22:25:02.948: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:02.948: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:03.030: INFO: Exec stderr: ""
Nov 22 22:25:03.030: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-3784 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:25:03.030: INFO: >>> kubeConfig: /root/.kube/config
Nov 22 22:25:03.146: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:25:03.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-3784" for this suite.
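The exec checks above distinguish the kubelet-managed /etc/hosts from an image-supplied one. The kubelet marks the files it generates with a leading comment (to my knowledge "# Kubernetes-managed hosts file"); a hedged, stdlib-only sketch of the kind of check the test performs on the `cat` output (the helper name `isKubeletManaged` is made up for illustration):

```go
package main

import (
	"fmt"
	"strings"
)

// kubeletManagedHeader is the comment the kubelet writes at the top of the
// /etc/hosts files it manages (assumption based on kubelet behavior; the
// e2e test obtains the file content by exec'ing `cat /etc/hosts`).
const kubeletManagedHeader = "# Kubernetes-managed hosts file"

// isKubeletManaged reports whether the given /etc/hosts content appears
// to have been generated by the kubelet.
func isKubeletManaged(hosts string) bool {
	return strings.HasPrefix(hosts, kubeletManagedHeader)
}

func main() {
	managed := "# Kubernetes-managed hosts file.\n127.0.0.1\tlocalhost\n"
	original := "127.0.0.1\tlocalhost\n"
	fmt.Println(isKubeletManaged(managed))  // true
	fmt.Println(isKubeletManaged(original)) // false
}
```

For `busybox-3` (which mounts its own /etc/hosts) and for the hostNetwork=true pod, the check is inverted: the file must not carry the marker.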
Nov 22 22:25:59.169: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:25:59.243: INFO: namespace e2e-kubelet-etc-hosts-3784 deletion completed in 56.092213533s
• [SLOW TEST:67.299 seconds]
[k8s.io] KubeletManagedEtcHosts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:25:59.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 22:25:59.325: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2" in namespace "projected-7770" to be "success or failure"
Nov 22 22:25:59.362: INFO: Pod "downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 36.678296ms
Nov 22 22:26:01.425: INFO: Pod "downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100223991s
Nov 22 22:26:03.430: INFO: Pod "downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2": Phase="Running", Reason="", readiness=true. Elapsed: 4.104542186s
Nov 22 22:26:05.433: INFO: Pod "downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.107556421s
STEP: Saw pod success
Nov 22 22:26:05.433: INFO: Pod "downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2" satisfied condition "success or failure"
Nov 22 22:26:05.434: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2 container client-container:
STEP: delete the pod
Nov 22 22:26:05.454: INFO: Waiting for pod downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2 to disappear
Nov 22 22:26:05.460: INFO: Pod downwardapi-volume-ab9578bf-6416-4aab-a6b2-31b7fc47c8b2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:26:05.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7770" for this suite.
Nov 22 22:26:11.623: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:26:11.713: INFO: namespace projected-7770 deletion completed in 6.248755561s
• [SLOW TEST:12.470 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:26:11.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4555.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-4555.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4555.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-4555.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-4555.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4555.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 22 22:26:18.018: INFO: DNS probes using dns-4555/dns-test-9ad34002-7b08-451e-99dc-13ff5d6b781b succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:26:18.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4555" for this suite.
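The `awk` pipeline in the probe commands above rewrites the pod IP into the dashed pod A-record name that cluster DNS serves (e.g. an IP `1.2.3.4` in namespace `dns-4555` becomes `1-2-3-4.dns-4555.pod.cluster.local`). The same transformation as a small Go sketch (the helper name `podARecord` and the sample IP are illustrative, not from the log):

```go
package main

import (
	"fmt"
	"strings"
)

// podARecord converts a pod IP into the dashed pod A-record name that the
// probe scripts construct with awk: dots in the IP become dashes, followed
// by "<namespace>.pod.cluster.local".
func podARecord(podIP, namespace string) string {
	return fmt.Sprintf("%s.%s.pod.cluster.local",
		strings.ReplaceAll(podIP, ".", "-"), namespace)
}

func main() {
	fmt.Println(podARecord("10.244.1.5", "dns-4555"))
	// 10-244-1-5.dns-4555.pod.cluster.local
}
```

The probers then resolve that name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an `OK` marker file per successful lookup, which the test collects from `/results`.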
Nov 22 22:26:24.458: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:26:24.527: INFO: namespace dns-4555 deletion completed in 6.395396191s
• [SLOW TEST:12.814 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:26:24.528: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1122 22:26:34.645523       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 22 22:26:34.645: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:26:34.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6199" for this suite.
Nov 22 22:26:40.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:26:40.730: INFO: namespace gc-6199 deletion completed in 6.08233802s
• [SLOW TEST:16.203 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:26:40.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 22:26:40.817: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a27fb985-3df2-4098-811b-e9c201b2f3fc" in namespace "downward-api-5123" to be "success or failure"
Nov 22 22:26:40.822: INFO: Pod "downwardapi-volume-a27fb985-3df2-4098-811b-e9c201b2f3fc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.595355ms
Nov 22 22:26:42.826: INFO: Pod "downwardapi-volume-a27fb985-3df2-4098-811b-e9c201b2f3fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008215565s
Nov 22 22:26:44.830: INFO: Pod "downwardapi-volume-a27fb985-3df2-4098-811b-e9c201b2f3fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012617512s
STEP: Saw pod success
Nov 22 22:26:44.830: INFO: Pod "downwardapi-volume-a27fb985-3df2-4098-811b-e9c201b2f3fc" satisfied condition "success or failure"
Nov 22 22:26:44.833: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a27fb985-3df2-4098-811b-e9c201b2f3fc container client-container: <nil>
STEP: delete the pod
Nov 22 22:26:44.924: INFO: Waiting for pod downwardapi-volume-a27fb985-3df2-4098-811b-e9c201b2f3fc to disappear
Nov 22 22:26:44.959: INFO: Pod downwardapi-volume-a27fb985-3df2-4098-811b-e9c201b2f3fc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:26:44.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5123" for this suite.
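The "Waiting up to 5m0s for pod … to be \"success or failure\"" lines above come from the framework repeatedly polling the pod's phase and logging the elapsed time at each check. A minimal Python sketch of that poll-with-timeout pattern (the function name and the `get_phase` callable are hypothetical, not the framework's actual Go code):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=2.0):
    """Poll get_phase() until the pod reports a terminal phase
    ("Succeeded" or "Failed") or the timeout expires, printing the
    elapsed time at each check the way the e2e log does."""
    start = time.monotonic()
    while True:
        phase = get_phase()
        elapsed = time.monotonic() - start
        print(f'Pod phase={phase!r}, elapsed={elapsed:.3f}s')
        if phase in ("Succeeded", "Failed"):
            return phase
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        time.sleep(interval)
```

In the run above the pod stayed Pending for two checks, then reported Succeeded after roughly 4s, well inside the 5m0s budget.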
Nov 22 22:26:50.982: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:26:51.048: INFO: namespace downward-api-5123 deletion completed in 6.084948998s
• [SLOW TEST:10.318 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:26:51.049: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-1411
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1411 to expose endpoints map[]
Nov 22 22:26:51.276: INFO: successfully validated that service endpoint-test2 in namespace services-1411 exposes endpoints map[] (89.816237ms elapsed)
STEP: Creating pod pod1 in namespace services-1411
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1411 to expose endpoints map[pod1:[80]]
Nov 22 22:26:55.431: INFO: successfully validated that service endpoint-test2 in namespace services-1411 exposes endpoints map[pod1:[80]] (4.149100472s elapsed)
STEP: Creating pod pod2 in namespace services-1411
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1411 to expose endpoints map[pod1:[80] pod2:[80]]
Nov 22 22:26:58.534: INFO: successfully validated that service endpoint-test2 in namespace services-1411 exposes endpoints map[pod1:[80] pod2:[80]] (3.099890383s elapsed)
STEP: Deleting pod pod1 in namespace services-1411
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1411 to expose endpoints map[pod2:[80]]
Nov 22 22:26:59.577: INFO: successfully validated that service endpoint-test2 in namespace services-1411 exposes endpoints map[pod2:[80]] (1.039171034s elapsed)
STEP: Deleting pod pod2 in namespace services-1411
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-1411 to expose endpoints map[]
Nov 22 22:27:00.719: INFO: successfully validated that service endpoint-test2 in namespace services-1411 exposes endpoints map[] (1.137135255s elapsed)
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:27:00.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1411" for this suite.
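Each "successfully validated that service … exposes endpoints map[…]" line above is emitted the first time the observed endpoints match the expected pod-to-ports map, with the elapsed wait in parentheses. A rough Python sketch of that comparison loop (`get_endpoints` is a hypothetical accessor standing in for an Endpoints read from the API server, not the framework's real API):

```python
import time

def wait_for_endpoints(get_endpoints, expected, timeout=180.0, interval=1.0):
    """Poll get_endpoints() until the observed {pod_name: [ports]} map
    equals the expected one, mirroring the 3m0s wait in the log, and
    return the elapsed seconds that the log prints in parentheses."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        observed = get_endpoints()
        if observed == expected:
            return time.monotonic() - start
        time.sleep(interval)
    raise TimeoutError(f"endpoints never matched {expected!r}")
```

Note the test drives the expected map through the full lifecycle: empty, pod1 only, pod1+pod2, pod2 only, then empty again after both pods are deleted.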
Nov 22 22:27:07.000: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:27:07.097: INFO: namespace services-1411 deletion completed in 6.184858015s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:16.048 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:27:07.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 22:27:07.597: INFO: Creating ReplicaSet my-hostname-basic-ae1f4c10-ae03-4d8c-9665-c50e230b4405
Nov 22 22:27:07.605: INFO: Pod name my-hostname-basic-ae1f4c10-ae03-4d8c-9665-c50e230b4405: Found 0 pods out of 1
Nov 22 22:27:12.610: INFO: Pod name my-hostname-basic-ae1f4c10-ae03-4d8c-9665-c50e230b4405: Found 1 pods out of 1
Nov 22 22:27:12.610: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ae1f4c10-ae03-4d8c-9665-c50e230b4405" is running
Nov 22 22:27:12.613: INFO: Pod "my-hostname-basic-ae1f4c10-ae03-4d8c-9665-c50e230b4405-rqzs4" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-22 22:27:07 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-22 22:27:10 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-22 22:27:10 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-22 22:27:07 +0000 UTC Reason: Message:}])
Nov 22 22:27:12.614: INFO: Trying to dial the pod
Nov 22 22:27:17.627: INFO: Controller my-hostname-basic-ae1f4c10-ae03-4d8c-9665-c50e230b4405: Got expected result from replica 1 [my-hostname-basic-ae1f4c10-ae03-4d8c-9665-c50e230b4405-rqzs4]: "my-hostname-basic-ae1f4c10-ae03-4d8c-9665-c50e230b4405-rqzs4", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:27:17.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-313" for this suite.
Nov 22 22:27:23.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:27:23.763: INFO: namespace replicaset-313 deletion completed in 6.132012524s
• [SLOW TEST:16.666 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:27:23.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-7c489c95-dc37-47d2-ba48-4dc886531966
STEP: Creating secret with name s-test-opt-upd-555d3135-d198-4962-8d4d-20d6a304bea0
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-7c489c95-dc37-47d2-ba48-4dc886531966
STEP: Updating secret s-test-opt-upd-555d3135-d198-4962-8d4d-20d6a304bea0
STEP: Creating secret with name s-test-opt-create-90b856c6-4bbc-447e-bf48-80ee63521b74
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:27:34.022: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7355" for this suite.
Nov 22 22:27:58.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:27:58.129: INFO: namespace projected-7355 deletion completed in 24.102857961s
• [SLOW TEST:34.364 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:27:58.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Nov 22 22:28:06.291: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 22 22:28:06.315: INFO: Pod pod-with-prestop-http-hook still exists
Nov 22 22:28:08.315: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 22 22:28:08.319: INFO: Pod pod-with-prestop-http-hook still exists
Nov 22 22:28:10.315: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 22 22:28:10.319: INFO: Pod pod-with-prestop-http-hook still exists
Nov 22 22:28:12.315: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 22 22:28:12.319: INFO: Pod pod-with-prestop-http-hook still exists
Nov 22 22:28:14.315: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 22 22:28:14.318: INFO: Pod pod-with-prestop-http-hook still exists
Nov 22 22:28:16.315: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Nov 22 22:28:16.319: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:28:16.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2646" for this suite.
Nov 22 22:28:38.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:28:38.433: INFO: namespace container-lifecycle-hook-2646 deletion completed in 22.102364226s
• [SLOW TEST:40.304 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:28:38.433: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Nov 22 22:28:43.034: INFO: Successfully updated pod "pod-update-activedeadlineseconds-101ad416-8544-49ca-870a-4af4180c8f66"
Nov 22 22:28:43.034: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-101ad416-8544-49ca-870a-4af4180c8f66" in namespace "pods-3258" to be "terminated due to deadline exceeded"
Nov 22 22:28:43.040: INFO: Pod "pod-update-activedeadlineseconds-101ad416-8544-49ca-870a-4af4180c8f66": Phase="Running", Reason="", readiness=true. Elapsed: 6.02813ms
Nov 22 22:28:45.199: INFO: Pod "pod-update-activedeadlineseconds-101ad416-8544-49ca-870a-4af4180c8f66": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.165166995s
Nov 22 22:28:45.199: INFO: Pod "pod-update-activedeadlineseconds-101ad416-8544-49ca-870a-4af4180c8f66" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:28:45.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3258" for this suite.
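The test above shrinks the pod's `activeDeadlineSeconds` and then watches the pod flip from `Phase="Running"` to `Phase="Failed", Reason="DeadlineExceeded"`. The rule being exercised is, roughly: a pod is past its deadline once its running time exceeds `activeDeadlineSeconds`. A hedged sketch of that check (the function name is illustrative, not kubelet code):

```python
from datetime import datetime, timedelta

def past_active_deadline(start_time, active_deadline_seconds, now):
    """Return True when a pod's running time has exceeded its
    activeDeadlineSeconds, the condition that produces
    Phase="Failed", Reason="DeadlineExceeded" in the log above."""
    if active_deadline_seconds is None:
        return False  # no deadline set on the pod spec
    return now - start_time >= timedelta(seconds=active_deadline_seconds)
```

Because the deadline is evaluated against the pod's original start time, lowering `activeDeadlineSeconds` on a pod that has already been running can fail it almost immediately, which is why the log shows Failed only ~2s after the update.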
Nov 22 22:28:51.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:28:51.284: INFO: namespace pods-3258 deletion completed in 6.081599372s
• [SLOW TEST:12.851 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server
  should support --unix-socket=/path [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:28:51.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Nov 22 22:28:51.375: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix638356291/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:28:51.441: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2050" for this suite.
Nov 22 22:28:57.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:28:57.574: INFO: namespace kubectl-2050 deletion completed in 6.127506319s
• [SLOW TEST:6.289 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop
  should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:28:57.575: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-8200
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-8200
STEP: Deleting pre-stop pod
Nov 22 22:29:10.690: INFO: Saw: {
    "Hostname": "server",
    "Sent": null,
    "Received": {
        "prestop": 1
    },
    "Errors": null,
    "Log": [
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
        "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
    ],
    "StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:29:10.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-8200" for this suite.
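The "Saw:" blob above is the JSON status report the test reads back from the server pod after deleting the pre-stop pod; the test passes when the `Received` map shows the `/prestop` endpoint was hit at least once. A small Python sketch of that check (the report shape is taken from the log; the function name is hypothetical):

```python
import json

def prestop_hook_fired(report_text):
    """Parse the server pod's JSON status report and return True when
    the prestop handler was invoked at least once."""
    report = json.loads(report_text)
    received = report.get("Received") or {}
    return received.get("prestop", 0) >= 1
```

The `Log` entries about `default/nettest` endpoints come from the shared test image and are harmless here; only `Received["prestop"]` matters to this spec.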
Nov 22 22:29:48.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:29:48.809: INFO: namespace prestop-8200 deletion completed in 38.108763857s
• [SLOW TEST:51.233 seconds]
[k8s.io] [sig-node] PreStop
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:29:48.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-3b66e00f-08b9-4432-9db0-c063aa52efde
STEP: Creating a pod to test consume secrets
Nov 22 22:29:48.944: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366" in namespace "projected-9260" to be "success or failure"
Nov 22 22:29:48.990: INFO: Pod "pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366": Phase="Pending", Reason="", readiness=false. Elapsed: 45.202194ms
Nov 22 22:29:51.254: INFO: Pod "pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366": Phase="Pending", Reason="", readiness=false. Elapsed: 2.309718747s
Nov 22 22:29:53.259: INFO: Pod "pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314508146s
Nov 22 22:29:55.263: INFO: Pod "pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.318771219s
STEP: Saw pod success
Nov 22 22:29:55.263: INFO: Pod "pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366" satisfied condition "success or failure"
Nov 22 22:29:55.267: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366 container projected-secret-volume-test: <nil>
STEP: delete the pod
Nov 22 22:29:55.301: INFO: Waiting for pod pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366 to disappear
Nov 22 22:29:55.373: INFO: Pod pod-projected-secrets-e88a832e-21a7-42b1-a5d6-052f3508f366 no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:29:55.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9260" for this suite.
Nov 22 22:30:01.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:30:01.475: INFO: namespace projected-9260 deletion completed in 6.097661464s
• [SLOW TEST:12.667 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicationController
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:30:01.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Nov 22 22:30:01.622: INFO: Pod name pod-release: Found 0 pods out of 1
Nov 22 22:30:06.626: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:30:07.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5308" for this suite.
Nov 22 22:30:13.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:30:13.742: INFO: namespace replication-controller-5308 deletion completed in 6.080840938s
• [SLOW TEST:12.266 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:30:13.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 22:30:18.085: INFO: Waiting up to 5m0s for pod "client-envvars-c69572ae-afef-4d04-a686-f828c6998a87" in namespace "pods-1748" to be "success or failure"
Nov 22 22:30:18.530: INFO: Pod "client-envvars-c69572ae-afef-4d04-a686-f828c6998a87": Phase="Pending", Reason="", readiness=false. Elapsed: 445.335496ms
Nov 22 22:30:20.534: INFO: Pod "client-envvars-c69572ae-afef-4d04-a686-f828c6998a87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.449491424s
Nov 22 22:30:22.539: INFO: Pod "client-envvars-c69572ae-afef-4d04-a686-f828c6998a87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.454126494s
STEP: Saw pod success
Nov 22 22:30:22.539: INFO: Pod "client-envvars-c69572ae-afef-4d04-a686-f828c6998a87" satisfied condition "success or failure"
Nov 22 22:30:22.542: INFO: Trying to get logs from node iruya-worker pod client-envvars-c69572ae-afef-4d04-a686-f828c6998a87 container env3cont: <nil>
STEP: delete the pod
Nov 22 22:30:22.588: INFO: Waiting for pod client-envvars-c69572ae-afef-4d04-a686-f828c6998a87 to disappear
Nov 22 22:30:22.625: INFO: Pod client-envvars-c69572ae-afef-4d04-a686-f828c6998a87 no longer exists
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:30:22.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1748" for this suite.
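The env3cont container above asserts on the Docker-links-style environment variables that the kubelet injects for every service that exists when the pod starts: a service named, say, `backend-srv` yields `BACKEND_SRV_SERVICE_HOST` and `BACKEND_SRV_SERVICE_PORT` (the service name here is made up; the `_SERVICE_HOST`/`_SERVICE_PORT` convention is documented Kubernetes behavior). A sketch of the name mangling:

```python
def service_env_var_names(service_name):
    """Derive the kubelet-injected env var names for a service:
    uppercase the name and replace '-' with '_'."""
    prefix = service_name.upper().replace("-", "_")
    return (f"{prefix}_SERVICE_HOST", f"{prefix}_SERVICE_PORT")
```

Because these variables are captured at container start, a service created after the pod is running will not appear in its environment; DNS-based discovery does not have that ordering constraint.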
Nov 22 22:31:06.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:31:06.732: INFO: namespace pods-1748 deletion completed in 44.103048007s • [SLOW TEST:52.990 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should contain environment variables for services [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:31:06.733: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 22 22:31:06.807: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-02001ced-6214-4740-9099-16e06b12c5bd" in namespace "downward-api-5987" to be "success or failure" Nov 22 22:31:06.810: INFO: Pod "downwardapi-volume-02001ced-6214-4740-9099-16e06b12c5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.78466ms Nov 22 22:31:08.815: INFO: Pod "downwardapi-volume-02001ced-6214-4740-9099-16e06b12c5bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008198216s Nov 22 22:31:10.818: INFO: Pod "downwardapi-volume-02001ced-6214-4740-9099-16e06b12c5bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011597193s STEP: Saw pod success Nov 22 22:31:10.818: INFO: Pod "downwardapi-volume-02001ced-6214-4740-9099-16e06b12c5bd" satisfied condition "success or failure" Nov 22 22:31:10.820: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-02001ced-6214-4740-9099-16e06b12c5bd container client-container: STEP: delete the pod Nov 22 22:31:10.842: INFO: Waiting for pod downwardapi-volume-02001ced-6214-4740-9099-16e06b12c5bd to disappear Nov 22 22:31:10.859: INFO: Pod downwardapi-volume-02001ced-6214-4740-9099-16e06b12c5bd no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:31:10.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5987" for this suite. 
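The "set mode on item file" case pairs a downwardAPI volume item with an explicit file mode, which the container then inspects. A minimal sketch, assuming illustrative names and a 0400 mode (neither is taken from the log):

```yaml
# Hypothetical sketch: a downwardAPI volume item with an explicit mode;
# the container stats the projected file to verify its permissions.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400
```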
Nov 22 22:31:16.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:31:16.953: INFO: namespace downward-api-5987 deletion completed in 6.091425932s • [SLOW TEST:10.221 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:31:16.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 22:31:17.002: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:31:18.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9284" for this suite. Nov 22 22:31:24.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:31:24.233: INFO: namespace custom-resource-definition-9284 deletion completed in 6.10096717s • [SLOW TEST:7.280 seconds] [sig-api-machinery] CustomResourceDefinition resources /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:31:24.234: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-ed82094e-8a46-4df6-a070-b17b2b9b3104 in namespace container-probe-5617 Nov 22 22:31:28.340: INFO: Started pod liveness-ed82094e-8a46-4df6-a070-b17b2b9b3104 in namespace container-probe-5617 STEP: checking the pod's current state and verifying that restartCount is present Nov 22 22:31:28.343: INFO: Initial restart count of pod liveness-ed82094e-8a46-4df6-a070-b17b2b9b3104 is 0 Nov 22 22:31:44.378: INFO: Restart count of pod container-probe-5617/liveness-ed82094e-8a46-4df6-a070-b17b2b9b3104 is now 1 (16.035581101s elapsed) Nov 22 22:32:04.421: INFO: Restart count of pod container-probe-5617/liveness-ed82094e-8a46-4df6-a070-b17b2b9b3104 is now 2 (36.078490731s elapsed) Nov 22 22:32:24.571: INFO: Restart count of pod container-probe-5617/liveness-ed82094e-8a46-4df6-a070-b17b2b9b3104 is now 3 (56.227848884s elapsed) Nov 22 22:32:44.624: INFO: Restart count of pod container-probe-5617/liveness-ed82094e-8a46-4df6-a070-b17b2b9b3104 is now 4 (1m16.281165439s elapsed) Nov 22 22:33:52.858: INFO: Restart count of pod container-probe-5617/liveness-ed82094e-8a46-4df6-a070-b17b2b9b3104 is now 5 (2m24.515043639s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:33:52.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5617" for this suite. 
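The restart counts above come from a pod whose liveness probe is made to fail repeatedly, so the kubelet keeps restarting the container. A sketch of such a pod, assuming the standard exec-probe pattern from the Kubernetes documentation (image and timings are illustrative):

```yaml
# Hypothetical sketch: the container is healthy while /tmp/healthy exists,
# then deletes it; the exec liveness probe starts failing and the kubelet
# restarts the container, incrementing restartCount each time.
apiVersion: v1
kind: Pod
metadata:
  name: liveness-example
spec:
  containers:
  - name: liveness
    image: busybox
    command: ["sh", "-c", "touch /tmp/healthy; sleep 10; rm /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
```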
Nov 22 22:33:58.935: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:33:59.048: INFO: namespace container-probe-5617 deletion completed in 6.12727487s • [SLOW TEST:154.815 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:33:59.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-cff7bd01-779b-423d-86b5-fe39be64870d STEP: Creating a pod to test consume secrets Nov 22 22:33:59.124: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f73bf055-6ecb-477b-a7d5-e3d60742e048" in namespace "projected-3791" to be "success or failure" Nov 22 22:33:59.129: INFO: Pod 
"pod-projected-secrets-f73bf055-6ecb-477b-a7d5-e3d60742e048": Phase="Pending", Reason="", readiness=false. Elapsed: 5.753665ms Nov 22 22:34:01.134: INFO: Pod "pod-projected-secrets-f73bf055-6ecb-477b-a7d5-e3d60742e048": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009816767s Nov 22 22:34:03.138: INFO: Pod "pod-projected-secrets-f73bf055-6ecb-477b-a7d5-e3d60742e048": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014164508s STEP: Saw pod success Nov 22 22:34:03.138: INFO: Pod "pod-projected-secrets-f73bf055-6ecb-477b-a7d5-e3d60742e048" satisfied condition "success or failure" Nov 22 22:34:03.141: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f73bf055-6ecb-477b-a7d5-e3d60742e048 container projected-secret-volume-test: STEP: delete the pod Nov 22 22:34:03.157: INFO: Waiting for pod pod-projected-secrets-f73bf055-6ecb-477b-a7d5-e3d60742e048 to disappear Nov 22 22:34:03.180: INFO: Pod pod-projected-secrets-f73bf055-6ecb-477b-a7d5-e3d60742e048 no longer exists [AfterEach] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:34:03.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3791" for this suite. 
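The projected-secret tests in this run mount a secret through a `projected` volume and read it back from the container. A minimal sketch (secret name, key, and paths are illustrative, not the generated names from the log):

```yaml
# Hypothetical sketch: a secret exposed via a projected volume; the
# container cats the projected file so the test can compare its contents.
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-example
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected/secret-data"]
    volumeMounts:
    - name: projected-vol
      mountPath: /etc/projected
  volumes:
  - name: projected-vol
    projected:
      sources:
      - secret:
          name: projected-secret-test
          items:
          - key: data-1
            path: secret-data
```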
Nov 22 22:34:09.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:34:09.289: INFO: namespace projected-3791 deletion completed in 6.104710591s • [SLOW TEST:10.240 seconds] [sig-storage] Projected secret /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:34:09.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4776.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4776.svc.cluster.local CNAME > 
/results/jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 22 22:34:15.405: INFO: File jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-5704ae45-bcb6-445f-83c1-9c6dc30c1554 contains '' instead of 'foo.example.com.' Nov 22 22:34:15.405: INFO: Lookups using dns-4776/dns-test-5704ae45-bcb6-445f-83c1-9c6dc30c1554 failed for: [jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local] Nov 22 22:34:20.413: INFO: DNS probes using dns-test-5704ae45-bcb6-445f-83c1-9c6dc30c1554 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4776.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4776.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 22 22:34:27.569: INFO: File wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 22 22:34:27.573: INFO: File jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d contains 'foo.example.com. ' instead of 'bar.example.com.' 
Nov 22 22:34:27.573: INFO: Lookups using dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d failed for: [wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local] Nov 22 22:34:32.578: INFO: File wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 22 22:34:32.582: INFO: File jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 22 22:34:32.582: INFO: Lookups using dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d failed for: [wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local] Nov 22 22:34:37.577: INFO: File wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 22 22:34:37.581: INFO: File jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 22 22:34:37.581: INFO: Lookups using dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d failed for: [wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local] Nov 22 22:34:42.578: INFO: File wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d contains 'foo.example.com. ' instead of 'bar.example.com.' Nov 22 22:34:42.581: INFO: File jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local from pod dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d contains 'foo.example.com. ' instead of 'bar.example.com.' 
Nov 22 22:34:42.581: INFO: Lookups using dns-4776/dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d failed for: [wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local] Nov 22 22:34:47.582: INFO: DNS probes using dns-test-f88f9d50-ae1a-43b4-a3ce-9118fb07348d succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4776.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-4776.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-4776.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-4776.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Nov 22 22:34:56.252: INFO: DNS probes using dns-test-476de0b2-9705-4ac4-810b-600f42821062 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:34:56.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-4776" for this suite. 
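The ExternalName service exercised above, reduced to a manifest sketch. Per the log, the test first points it at foo.example.com, then changes it to bar.example.com (the probes briefly see the stale CNAME before succeeding), and finally converts it to type=ClusterIP:

```yaml
# Sketch of the service under test; the DNS probe pods resolve
# dns-test-service-3.dns-4776.svc.cluster.local and expect the CNAME
# to follow the externalName field.
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-4776
spec:
  type: ExternalName
  externalName: foo.example.com
```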
Nov 22 22:35:04.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:35:04.552: INFO: namespace dns-4776 deletion completed in 8.226013186s • [SLOW TEST:55.263 seconds] [sig-network] DNS /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:35:04.553: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Nov 22 22:35:04.675: INFO: Waiting up to 5m0s for pod "pod-bde33f4f-f9e1-4afc-ba08-6d8058cc4431" in namespace "emptydir-8848" to be "success or failure" Nov 22 22:35:04.684: INFO: Pod "pod-bde33f4f-f9e1-4afc-ba08-6d8058cc4431": Phase="Pending", Reason="", 
readiness=false. Elapsed: 9.109623ms Nov 22 22:35:06.688: INFO: Pod "pod-bde33f4f-f9e1-4afc-ba08-6d8058cc4431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012811099s Nov 22 22:35:08.693: INFO: Pod "pod-bde33f4f-f9e1-4afc-ba08-6d8058cc4431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018732576s STEP: Saw pod success Nov 22 22:35:08.694: INFO: Pod "pod-bde33f4f-f9e1-4afc-ba08-6d8058cc4431" satisfied condition "success or failure" Nov 22 22:35:08.697: INFO: Trying to get logs from node iruya-worker pod pod-bde33f4f-f9e1-4afc-ba08-6d8058cc4431 container test-container: STEP: delete the pod Nov 22 22:35:08.716: INFO: Waiting for pod pod-bde33f4f-f9e1-4afc-ba08-6d8058cc4431 to disappear Nov 22 22:35:08.741: INFO: Pod pod-bde33f4f-f9e1-4afc-ba08-6d8058cc4431 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:35:08.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8848" for this suite. 
Nov 22 22:35:14.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:35:14.836: INFO: namespace emptydir-8848 deletion completed in 6.092108462s • [SLOW TEST:10.283 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:35:14.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 22 22:35:14.920: INFO: Waiting up to 5m0s for pod "pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8" in namespace "emptydir-1020" to be "success or failure" Nov 22 22:35:14.942: INFO: Pod "pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.846717ms Nov 22 22:35:16.946: INFO: Pod "pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02628733s Nov 22 22:35:18.993: INFO: Pod "pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072834922s Nov 22 22:35:20.997: INFO: Pod "pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.077107099s STEP: Saw pod success Nov 22 22:35:20.997: INFO: Pod "pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8" satisfied condition "success or failure" Nov 22 22:35:21.000: INFO: Trying to get logs from node iruya-worker2 pod pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8 container test-container: STEP: delete the pod Nov 22 22:35:21.036: INFO: Waiting for pod pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8 to disappear Nov 22 22:35:21.041: INFO: Pod pod-79c5f166-3a87-4b8a-8337-3d12f54fe5e8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:35:21.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1020" for this suite. 
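The "(root,0666,tmpfs)" variant above backs the emptyDir with memory, i.e. a tmpfs mount. A sketch of the volume definition involved (pod name, image, and the listing command are illustrative; the 0666 file mode is set by the test binary when it writes the file):

```yaml
# Hypothetical sketch: an emptyDir on medium Memory (tmpfs); the test
# container writes a file with mode 0666 and verifies its permissions.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -l /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory
```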
Nov 22 22:35:27.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:35:27.143: INFO: namespace emptydir-1020 deletion completed in 6.099404708s • [SLOW TEST:12.307 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:35:27.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Nov 22 22:35:27.241: INFO: Waiting up to 5m0s for pod "var-expansion-f95b3e58-0aea-4afe-bea9-9c043698689d" in namespace "var-expansion-2829" to be "success or failure" Nov 22 22:35:27.264: INFO: Pod "var-expansion-f95b3e58-0aea-4afe-bea9-9c043698689d": Phase="Pending", Reason="", 
readiness=false. Elapsed: 23.318184ms Nov 22 22:35:29.268: INFO: Pod "var-expansion-f95b3e58-0aea-4afe-bea9-9c043698689d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027192232s Nov 22 22:35:31.273: INFO: Pod "var-expansion-f95b3e58-0aea-4afe-bea9-9c043698689d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032389406s STEP: Saw pod success Nov 22 22:35:31.273: INFO: Pod "var-expansion-f95b3e58-0aea-4afe-bea9-9c043698689d" satisfied condition "success or failure" Nov 22 22:35:31.277: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-f95b3e58-0aea-4afe-bea9-9c043698689d container dapi-container: STEP: delete the pod Nov 22 22:35:31.294: INFO: Waiting for pod var-expansion-f95b3e58-0aea-4afe-bea9-9c043698689d to disappear Nov 22 22:35:31.316: INFO: Pod var-expansion-f95b3e58-0aea-4afe-bea9-9c043698689d no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:35:31.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-2829" for this suite. 
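The variable-expansion test verifies that `$(VAR)` references in a container's args are substituted from the container's environment before the process starts. A minimal sketch (variable name and value are illustrative):

```yaml
# Hypothetical sketch: $(MESSAGE) in args is expanded from env, so the
# container echoes the substituted value rather than the literal string.
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]
```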
Nov 22 22:35:37.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:35:37.414: INFO: namespace var-expansion-2829 deletion completed in 6.093892003s • [SLOW TEST:10.270 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:35:37.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-170 [It] Should recreate evicted statefulset [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-170 STEP: Creating statefulset with conflicting port in namespace statefulset-170 STEP: Waiting until pod test-pod will start running in namespace statefulset-170 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-170 Nov 22 22:35:43.565: INFO: Observed stateful pod in namespace: statefulset-170, name: ss-0, uid: ee2e8d6d-2ea7-4e00-9264-8a765e06f4ac, status phase: Pending. Waiting for statefulset controller to delete. Nov 22 22:35:44.083: INFO: Observed stateful pod in namespace: statefulset-170, name: ss-0, uid: ee2e8d6d-2ea7-4e00-9264-8a765e06f4ac, status phase: Failed. Waiting for statefulset controller to delete. Nov 22 22:35:44.110: INFO: Observed stateful pod in namespace: statefulset-170, name: ss-0, uid: ee2e8d6d-2ea7-4e00-9264-8a765e06f4ac, status phase: Failed. Waiting for statefulset controller to delete. 
Nov 22 22:35:44.127: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-170
STEP: Removing pod with conflicting port in namespace statefulset-170
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-170 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Nov 22 22:35:48.211: INFO: Deleting all statefulset in ns statefulset-170
Nov 22 22:35:48.214: INFO: Scaling statefulset ss to 0
Nov 22 22:35:58.232: INFO: Waiting for statefulset status.replicas updated to 0
Nov 22 22:35:58.235: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:35:58.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-170" for this suite.
Nov 22 22:36:04.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:36:04.366: INFO: namespace statefulset-170 deletion completed in 6.112130239s
• [SLOW TEST:26.952 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:36:04.366: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-c03bee8f-2b6b-4ec6-a639-8ccf5f8f70ed
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-c03bee8f-2b6b-4ec6-a639-8ccf5f8f70ed
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:36:12.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-172" for this suite.
Nov 22 22:36:34.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:36:34.677: INFO: namespace projected-172 deletion completed in 22.144903948s
• [SLOW TEST:30.310 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:36:34.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Nov 22 22:36:34.798: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:34.814: INFO: Number of nodes with available pods: 0
Nov 22 22:36:34.814: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:35.820: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:35.823: INFO: Number of nodes with available pods: 0
Nov 22 22:36:35.823: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:36.819: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:37.354: INFO: Number of nodes with available pods: 0
Nov 22 22:36:37.354: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:37.818: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:37.821: INFO: Number of nodes with available pods: 0
Nov 22 22:36:37.821: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:38.819: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:38.822: INFO: Number of nodes with available pods: 1
Nov 22 22:36:38.822: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:39.905: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:40.192: INFO: Number of nodes with available pods: 2
Nov 22 22:36:40.192: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Nov 22 22:36:40.208: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:40.214: INFO: Number of nodes with available pods: 1
Nov 22 22:36:40.214: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:41.219: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:41.223: INFO: Number of nodes with available pods: 1
Nov 22 22:36:41.223: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:42.219: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:42.223: INFO: Number of nodes with available pods: 1
Nov 22 22:36:42.223: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:43.219: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:43.222: INFO: Number of nodes with available pods: 1
Nov 22 22:36:43.222: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 22:36:44.219: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 22:36:44.223: INFO: Number of nodes with available pods: 2
Nov 22 22:36:44.223: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3207, will wait for the garbage collector to delete the pods
Nov 22 22:36:44.289: INFO: Deleting DaemonSet.extensions daemon-set took: 8.401133ms
Nov 22 22:36:44.590: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.297327ms
Nov 22 22:36:55.693: INFO: Number of nodes with available pods: 0
Nov 22 22:36:55.693: INFO: Number of running nodes: 0, number of available pods: 0
Nov 22 22:36:55.698: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3207/daemonsets","resourceVersion":"10978574"},"items":null}
Nov 22 22:36:55.700: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3207/pods","resourceVersion":"10978574"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:36:55.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3207" for this suite.
Nov 22 22:37:01.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:37:01.850: INFO: namespace daemonsets-3207 deletion completed in 6.135460025s
• [SLOW TEST:27.173 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:37:01.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-e0d79204-1ebd-4a66-8ac9-df61fec8078d in namespace container-probe-345
Nov 22 22:37:05.983: INFO: Started pod busybox-e0d79204-1ebd-4a66-8ac9-df61fec8078d in namespace container-probe-345
STEP: checking the pod's current state and verifying that restartCount is present
Nov 22 22:37:05.986: INFO: Initial restart count of pod busybox-e0d79204-1ebd-4a66-8ac9-df61fec8078d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:41:06.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-345" for this suite.
Nov 22 22:41:12.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:41:12.817: INFO: namespace container-probe-345 deletion completed in 6.091703155s
• [SLOW TEST:250.967 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:41:12.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Nov 22 22:41:12.894: INFO: Waiting up to 5m0s for pod "pod-77c52091-9ab1-495f-9464-a3f363178c7c" in namespace "emptydir-6623" to be "success or failure"
Nov 22 22:41:12.897: INFO: Pod "pod-77c52091-9ab1-495f-9464-a3f363178c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.937591ms
Nov 22 22:41:14.901: INFO: Pod "pod-77c52091-9ab1-495f-9464-a3f363178c7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007238466s
Nov 22 22:41:16.905: INFO: Pod "pod-77c52091-9ab1-495f-9464-a3f363178c7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011115491s
STEP: Saw pod success
Nov 22 22:41:16.905: INFO: Pod "pod-77c52091-9ab1-495f-9464-a3f363178c7c" satisfied condition "success or failure"
Nov 22 22:41:16.908: INFO: Trying to get logs from node iruya-worker pod pod-77c52091-9ab1-495f-9464-a3f363178c7c container test-container: 
STEP: delete the pod
Nov 22 22:41:17.004: INFO: Waiting for pod pod-77c52091-9ab1-495f-9464-a3f363178c7c to disappear
Nov 22 22:41:17.017: INFO: Pod pod-77c52091-9ab1-495f-9464-a3f363178c7c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:41:17.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6623" for this suite.
Nov 22 22:41:23.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:41:23.154: INFO: namespace emptydir-6623 deletion completed in 6.105553188s
• [SLOW TEST:10.336 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:41:23.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Nov 22 22:41:23.257: INFO: Waiting up to 5m0s for pod "downward-api-3010b2eb-f4fe-4504-a886-79bc60590d20" in namespace "downward-api-7993" to be "success or failure"
Nov 22 22:41:23.262: INFO: Pod "downward-api-3010b2eb-f4fe-4504-a886-79bc60590d20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.349824ms
Nov 22 22:41:25.265: INFO: Pod "downward-api-3010b2eb-f4fe-4504-a886-79bc60590d20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008034683s
Nov 22 22:41:27.269: INFO: Pod "downward-api-3010b2eb-f4fe-4504-a886-79bc60590d20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011996689s
STEP: Saw pod success
Nov 22 22:41:27.269: INFO: Pod "downward-api-3010b2eb-f4fe-4504-a886-79bc60590d20" satisfied condition "success or failure"
Nov 22 22:41:27.272: INFO: Trying to get logs from node iruya-worker pod downward-api-3010b2eb-f4fe-4504-a886-79bc60590d20 container dapi-container: 
STEP: delete the pod
Nov 22 22:41:27.305: INFO: Waiting for pod downward-api-3010b2eb-f4fe-4504-a886-79bc60590d20 to disappear
Nov 22 22:41:27.309: INFO: Pod downward-api-3010b2eb-f4fe-4504-a886-79bc60590d20 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:41:27.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7993" for this suite.
Nov 22 22:41:33.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:41:33.402: INFO: namespace downward-api-7993 deletion completed in 6.089852462s
• [SLOW TEST:10.248 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:41:33.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Nov 22 22:41:33.461: INFO: Waiting up to 5m0s for pod "pod-5c4f169f-dc2e-4d26-b646-d626ac2c8251" in namespace "emptydir-4040" to be "success or failure"
Nov 22 22:41:33.480: INFO: Pod "pod-5c4f169f-dc2e-4d26-b646-d626ac2c8251": Phase="Pending", Reason="", readiness=false. Elapsed: 18.529438ms
Nov 22 22:41:35.484: INFO: Pod "pod-5c4f169f-dc2e-4d26-b646-d626ac2c8251": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022265797s
Nov 22 22:41:37.488: INFO: Pod "pod-5c4f169f-dc2e-4d26-b646-d626ac2c8251": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026233564s
STEP: Saw pod success
Nov 22 22:41:37.488: INFO: Pod "pod-5c4f169f-dc2e-4d26-b646-d626ac2c8251" satisfied condition "success or failure"
Nov 22 22:41:37.491: INFO: Trying to get logs from node iruya-worker2 pod pod-5c4f169f-dc2e-4d26-b646-d626ac2c8251 container test-container: 
STEP: delete the pod
Nov 22 22:41:37.509: INFO: Waiting for pod pod-5c4f169f-dc2e-4d26-b646-d626ac2c8251 to disappear
Nov 22 22:41:37.513: INFO: Pod pod-5c4f169f-dc2e-4d26-b646-d626ac2c8251 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:41:37.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4040" for this suite.
Nov 22 22:41:43.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:41:43.610: INFO: namespace emptydir-4040 deletion completed in 6.093552733s
• [SLOW TEST:10.206 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:41:43.610: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-7267
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Nov 22 22:41:43.697: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Nov 22 22:42:05.822: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.184 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7267 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:42:05.822: INFO: >>> kubeConfig: /root/.kube/config
I1122 22:42:05.860362 6 log.go:172] (0xc000dea2c0) (0xc00144eb40) Create stream
I1122 22:42:05.860386 6 log.go:172] (0xc000dea2c0) (0xc00144eb40) Stream added, broadcasting: 1
I1122 22:42:05.863135 6 log.go:172] (0xc000dea2c0) Reply frame received for 1
I1122 22:42:05.863189 6 log.go:172] (0xc000dea2c0) (0xc0025840a0) Create stream
I1122 22:42:05.863203 6 log.go:172] (0xc000dea2c0) (0xc0025840a0) Stream added, broadcasting: 3
I1122 22:42:05.864181 6 log.go:172] (0xc000dea2c0) Reply frame received for 3
I1122 22:42:05.864210 6 log.go:172] (0xc000dea2c0) (0xc00144ec80) Create stream
I1122 22:42:05.864219 6 log.go:172] (0xc000dea2c0) (0xc00144ec80) Stream added, broadcasting: 5
I1122 22:42:05.865292 6 log.go:172] (0xc000dea2c0) Reply frame received for 5
I1122 22:42:06.951133 6 log.go:172] (0xc000dea2c0) Data frame received for 3
I1122 22:42:06.951187 6 log.go:172] (0xc0025840a0) (3) Data frame handling
I1122 22:42:06.951216 6 log.go:172] (0xc0025840a0) (3) Data frame sent
I1122 22:42:06.951238 6 log.go:172] (0xc000dea2c0) Data frame received for 3
I1122 22:42:06.951257 6 log.go:172] (0xc0025840a0) (3) Data frame handling
I1122 22:42:06.951383 6 log.go:172] (0xc000dea2c0) Data frame received for 5
I1122 22:42:06.951408 6 log.go:172] (0xc00144ec80) (5) Data frame handling
I1122 22:42:06.953891 6 log.go:172] (0xc000dea2c0) Data frame received for 1
I1122 22:42:06.953930 6 log.go:172] (0xc00144eb40) (1) Data frame handling
I1122 22:42:06.953944 6 log.go:172] (0xc00144eb40) (1) Data frame sent
I1122 22:42:06.953966 6 log.go:172] (0xc000dea2c0) (0xc00144eb40) Stream removed, broadcasting: 1
I1122 22:42:06.953995 6 log.go:172] (0xc000dea2c0) Go away received
I1122 22:42:06.954214 6 log.go:172] (0xc000dea2c0) (0xc00144eb40) Stream removed, broadcasting: 1
I1122 22:42:06.954259 6 log.go:172] (0xc000dea2c0) (0xc0025840a0) Stream removed, broadcasting: 3
I1122 22:42:06.954285 6 log.go:172] (0xc000dea2c0) (0xc00144ec80) Stream removed, broadcasting: 5
Nov 22 22:42:06.954: INFO: Found all expected endpoints: [netserver-0]
Nov 22 22:42:06.958: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.217 8081 | grep -v '^\s*$'] Namespace:pod-network-test-7267 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Nov 22 22:42:06.958: INFO: >>> kubeConfig: /root/.kube/config
I1122 22:42:06.987469 6 log.go:172] (0xc002edc580) (0xc0015f01e0) Create stream
I1122 22:42:06.987500 6 log.go:172] (0xc002edc580) (0xc0015f01e0) Stream added, broadcasting: 1
I1122 22:42:06.989579 6 log.go:172] (0xc002edc580) Reply frame received for 1
I1122 22:42:06.989624 6 log.go:172] (0xc002edc580) (0xc00284b0e0) Create stream
I1122 22:42:06.989634 6 log.go:172] (0xc002edc580) (0xc00284b0e0) Stream added, broadcasting: 3
I1122 22:42:06.990426 6 log.go:172] (0xc002edc580) Reply frame received for 3
I1122 22:42:06.990459 6 log.go:172] (0xc002edc580) (0xc002750960) Create stream
I1122 22:42:06.990468 6 log.go:172] (0xc002edc580) (0xc002750960) Stream added, broadcasting: 5
I1122 22:42:06.991249 6 log.go:172] (0xc002edc580) Reply frame received for 5
I1122 22:42:08.075019 6 log.go:172] (0xc002edc580) Data frame received for 3
I1122 22:42:08.075077 6 log.go:172] (0xc00284b0e0) (3) Data frame handling
I1122 22:42:08.075150 6 log.go:172] (0xc00284b0e0) (3) Data frame sent
I1122 22:42:08.075277 6 log.go:172] (0xc002edc580) Data frame received for 5
I1122 22:42:08.075384 6 log.go:172] (0xc002750960) (5) Data frame handling
I1122 22:42:08.075481 6 log.go:172] (0xc002edc580) Data frame received for 3
I1122 22:42:08.075513 6 log.go:172] (0xc00284b0e0) (3) Data frame handling
I1122 22:42:08.077718 6 log.go:172] (0xc002edc580) Data frame received for 1
I1122 22:42:08.077741 6 log.go:172] (0xc0015f01e0) (1) Data frame handling
I1122 22:42:08.077760 6 log.go:172] (0xc0015f01e0) (1) Data frame sent
I1122 22:42:08.077775 6 log.go:172] (0xc002edc580) (0xc0015f01e0) Stream removed, broadcasting: 1
I1122 22:42:08.077796 6 log.go:172] (0xc002edc580) Go away received
I1122 22:42:08.077953 6 log.go:172] (0xc002edc580) (0xc0015f01e0) Stream removed, broadcasting: 1
I1122 22:42:08.077982 6 log.go:172] (0xc002edc580) (0xc00284b0e0) Stream removed, broadcasting: 3
I1122 22:42:08.078002 6 log.go:172] (0xc002edc580) (0xc002750960) Stream removed, broadcasting: 5
Nov 22 22:42:08.078: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:42:08.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-7267" for this suite.
Nov 22 22:42:32.098: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:42:32.174: INFO: namespace pod-network-test-7267 deletion completed in 24.09107397s • [SLOW TEST:48.564 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:42:32.175: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Nov 22 22:42:36.806: INFO: Successfully updated pod "labelsupdate7b01ac20-e66e-439b-a3ca-b93bbf6483cd" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:42:38.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4864" for this suite. Nov 22 22:43:00.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:43:00.931: INFO: namespace projected-4864 deletion completed in 22.084516683s • [SLOW TEST:28.757 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:43:00.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default 
service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Nov 22 22:43:01.009: INFO: Waiting up to 5m0s for pod "pod-467e902f-aaf5-48cc-b076-19ba95f534e2" in namespace "emptydir-6776" to be "success or failure" Nov 22 22:43:01.013: INFO: Pod "pod-467e902f-aaf5-48cc-b076-19ba95f534e2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.546941ms Nov 22 22:43:03.027: INFO: Pod "pod-467e902f-aaf5-48cc-b076-19ba95f534e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01776169s Nov 22 22:43:05.031: INFO: Pod "pod-467e902f-aaf5-48cc-b076-19ba95f534e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022224025s STEP: Saw pod success Nov 22 22:43:05.031: INFO: Pod "pod-467e902f-aaf5-48cc-b076-19ba95f534e2" satisfied condition "success or failure" Nov 22 22:43:05.035: INFO: Trying to get logs from node iruya-worker pod pod-467e902f-aaf5-48cc-b076-19ba95f534e2 container test-container: STEP: delete the pod Nov 22 22:43:05.055: INFO: Waiting for pod pod-467e902f-aaf5-48cc-b076-19ba95f534e2 to disappear Nov 22 22:43:05.081: INFO: Pod pod-467e902f-aaf5-48cc-b076-19ba95f534e2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:43:05.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6776" for this suite. 
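[Editor's note] The EmptyDir (root,0644,default) test above creates a pod with an emptyDir volume on the node's default medium and verifies file content and 0644 permissions from inside the container. A minimal sketch of that kind of pod follows; the image, command, and names are assumptions for illustration, not the actual mounttest image and arguments the e2e framework uses:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29           # assumed stand-in image
    command: ["sh", "-c", "touch /mnt/test && chmod 0644 /mnt/test && stat -c '%a' /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir: {}                  # no medium set -> node's default storage medium
```

The test then reads the container's logs (as seen in the "Trying to get logs" step) to confirm the expected mode and content.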
Nov 22 22:43:11.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:43:11.196: INFO: namespace emptydir-6776 deletion completed in 6.111830148s • [SLOW TEST:10.265 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:43:11.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e1b555e7-3e6e-4fc7-b439-062b74c44cef STEP: Creating a pod to test consume secrets Nov 22 22:43:11.278: INFO: Waiting up to 5m0s for pod "pod-secrets-9caeb323-4935-44be-bbcb-7bb6fcd72948" in namespace "secrets-6467" to be "success or failure" Nov 22 22:43:11.282: INFO: Pod 
"pod-secrets-9caeb323-4935-44be-bbcb-7bb6fcd72948": Phase="Pending", Reason="", readiness=false. Elapsed: 3.63026ms Nov 22 22:43:13.312: INFO: Pod "pod-secrets-9caeb323-4935-44be-bbcb-7bb6fcd72948": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034079041s Nov 22 22:43:15.316: INFO: Pod "pod-secrets-9caeb323-4935-44be-bbcb-7bb6fcd72948": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037721907s STEP: Saw pod success Nov 22 22:43:15.316: INFO: Pod "pod-secrets-9caeb323-4935-44be-bbcb-7bb6fcd72948" satisfied condition "success or failure" Nov 22 22:43:15.319: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-9caeb323-4935-44be-bbcb-7bb6fcd72948 container secret-volume-test: STEP: delete the pod Nov 22 22:43:15.337: INFO: Waiting for pod pod-secrets-9caeb323-4935-44be-bbcb-7bb6fcd72948 to disappear Nov 22 22:43:15.342: INFO: Pod pod-secrets-9caeb323-4935-44be-bbcb-7bb6fcd72948 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:43:15.342: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6467" for this suite. 
Nov 22 22:43:21.357: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:43:21.471: INFO: namespace secrets-6467 deletion completed in 6.125735918s • [SLOW TEST:10.274 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:43:21.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Nov 22 22:43:21.969: INFO: PodSpec: 
initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:43:28.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6797" for this suite. Nov 22 22:43:34.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:43:34.510: INFO: namespace init-container-6797 deletion completed in 6.082824693s • [SLOW TEST:13.039 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:43:34.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-713b0d6a-8e7a-4d1b-b24f-b58de76f85be STEP: Creating a pod to test consume configMaps Nov 22 22:43:34.589: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-660b5add-327c-4289-acea-015ef479e5e2" in namespace "projected-1929" to be "success or failure" Nov 22 22:43:34.593: INFO: Pod "pod-projected-configmaps-660b5add-327c-4289-acea-015ef479e5e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08515ms Nov 22 22:43:36.597: INFO: Pod "pod-projected-configmaps-660b5add-327c-4289-acea-015ef479e5e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007995724s Nov 22 22:43:38.601: INFO: Pod "pod-projected-configmaps-660b5add-327c-4289-acea-015ef479e5e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011668531s STEP: Saw pod success Nov 22 22:43:38.601: INFO: Pod "pod-projected-configmaps-660b5add-327c-4289-acea-015ef479e5e2" satisfied condition "success or failure" Nov 22 22:43:38.603: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-660b5add-327c-4289-acea-015ef479e5e2 container projected-configmap-volume-test: STEP: delete the pod Nov 22 22:43:38.670: INFO: Waiting for pod pod-projected-configmaps-660b5add-327c-4289-acea-015ef479e5e2 to disappear Nov 22 22:43:38.677: INFO: Pod pod-projected-configmaps-660b5add-327c-4289-acea-015ef479e5e2 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:43:38.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1929" for this suite. 
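[Editor's note] The Projected configMap test above exercises two things its name encodes: key-to-path "mappings" (the `items` field of the projected source) and running the consumer as a non-root user. A hedged sketch, with hypothetical names and an assumed UID/image:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: projected-configmap-demo  # hypothetical name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # non-root, as the test name implies (UID is an assumption)
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29           # assumed stand-in image
    command: ["cat", "/etc/projected/path/to/data-1"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/projected
  volumes:
  - name: podinfo
    projected:
      sources:
      - configMap:
          name: projected-configmap-demo
          items:                  # the "mappings": remap key data-1 to a new path
          - key: data-1
            path: path/to/data-1
```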
Nov 22 22:43:44.698: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:43:44.769: INFO: namespace projected-1929 deletion completed in 6.088471042s • [SLOW TEST:10.259 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:43:44.770: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-jt56 STEP: Creating a pod to test atomic-volume-subpath Nov 22 22:43:44.884: INFO: 
Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-jt56" in namespace "subpath-7000" to be "success or failure" Nov 22 22:43:44.900: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Pending", Reason="", readiness=false. Elapsed: 16.23675ms Nov 22 22:43:46.906: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021980349s Nov 22 22:43:48.909: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 4.025750843s Nov 22 22:43:50.913: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 6.029249928s Nov 22 22:43:52.918: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 8.03410799s Nov 22 22:43:54.921: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 10.03748616s Nov 22 22:43:56.930: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 12.046616557s Nov 22 22:43:58.955: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 14.071467847s Nov 22 22:44:00.962: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 16.078781988s Nov 22 22:44:02.965: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 18.081724031s Nov 22 22:44:04.969: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 20.084894936s Nov 22 22:44:06.972: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Running", Reason="", readiness=true. Elapsed: 22.088041726s Nov 22 22:44:08.976: INFO: Pod "pod-subpath-test-downwardapi-jt56": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.092800913s STEP: Saw pod success Nov 22 22:44:08.977: INFO: Pod "pod-subpath-test-downwardapi-jt56" satisfied condition "success or failure" Nov 22 22:44:08.978: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-jt56 container test-container-subpath-downwardapi-jt56: STEP: delete the pod Nov 22 22:44:09.013: INFO: Waiting for pod pod-subpath-test-downwardapi-jt56 to disappear Nov 22 22:44:09.025: INFO: Pod pod-subpath-test-downwardapi-jt56 no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-jt56 Nov 22 22:44:09.026: INFO: Deleting pod "pod-subpath-test-downwardapi-jt56" in namespace "subpath-7000" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:44:09.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-7000" for this suite. Nov 22 22:44:15.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:44:15.162: INFO: namespace subpath-7000 deletion completed in 6.12992594s • [SLOW TEST:30.392 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:44:15.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-4 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4 to expose endpoints map[] Nov 22 22:44:15.326: INFO: Get endpoints failed (17.812834ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Nov 22 22:44:16.330: INFO: successfully validated that service multi-endpoint-test in namespace services-4 exposes endpoints map[] (1.021932799s elapsed) STEP: Creating pod pod1 in namespace services-4 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4 to expose endpoints map[pod1:[100]] Nov 22 22:44:20.377: INFO: successfully validated that service multi-endpoint-test in namespace services-4 exposes endpoints map[pod1:[100]] (4.039166587s elapsed) STEP: Creating pod pod2 in namespace services-4 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4 to expose endpoints map[pod1:[100] pod2:[101]] Nov 22 22:44:24.460: INFO: successfully validated that service 
multi-endpoint-test in namespace services-4 exposes endpoints map[pod1:[100] pod2:[101]] (4.077972896s elapsed) STEP: Deleting pod pod1 in namespace services-4 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4 to expose endpoints map[pod2:[101]] Nov 22 22:44:25.538: INFO: successfully validated that service multi-endpoint-test in namespace services-4 exposes endpoints map[pod2:[101]] (1.072465285s elapsed) STEP: Deleting pod pod2 in namespace services-4 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-4 to expose endpoints map[] Nov 22 22:44:26.570: INFO: successfully validated that service multi-endpoint-test in namespace services-4 exposes endpoints map[] (1.027553868s elapsed) [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:44:26.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4" for this suite. 
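[Editor's note] The Services test above validates that a multi-port service's endpoints track pod membership: the log shows the endpoint map going from empty, to `map[pod1:[100]]`, to `map[pod1:[100] pod2:[101]]`, and back down as pods are deleted. A sketch of what such a service could look like; the selector label and service ports are assumptions, though target ports 100 and 101 match the endpoint maps in the log:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multiport-demo           # hypothetical label shared by pod1 and pod2
  ports:
  - name: portname1
    port: 80                      # assumed service port
    targetPort: 100               # matches pod1's endpoint port in the log
  - name: portname2
    port: 81                      # assumed service port
    targetPort: 101               # matches pod2's endpoint port in the log
```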
Nov 22 22:44:48.610: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:44:48.691: INFO: namespace services-4 deletion completed in 22.089018568s [AfterEach] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:33.528 seconds] [sig-network] Services /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:44:48.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
STEP: creating the pod Nov 22 22:44:48.772: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:44:57.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3264" for this suite. Nov 22 22:45:21.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:45:21.264: INFO: namespace init-container-3264 deletion completed in 24.147921983s • [SLOW TEST:32.573 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:45:21.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] 
[Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Nov 22 22:45:21.304: INFO: Waiting up to 5m0s for pod "pod-ce889458-a033-4cb3-87a5-47255d54e6c2" in namespace "emptydir-7345" to be "success or failure" Nov 22 22:45:21.322: INFO: Pod "pod-ce889458-a033-4cb3-87a5-47255d54e6c2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.620535ms Nov 22 22:45:23.417: INFO: Pod "pod-ce889458-a033-4cb3-87a5-47255d54e6c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113202195s Nov 22 22:45:25.422: INFO: Pod "pod-ce889458-a033-4cb3-87a5-47255d54e6c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.117678603s STEP: Saw pod success Nov 22 22:45:25.422: INFO: Pod "pod-ce889458-a033-4cb3-87a5-47255d54e6c2" satisfied condition "success or failure" Nov 22 22:45:25.425: INFO: Trying to get logs from node iruya-worker2 pod pod-ce889458-a033-4cb3-87a5-47255d54e6c2 container test-container: STEP: delete the pod Nov 22 22:45:25.447: INFO: Waiting for pod pod-ce889458-a033-4cb3-87a5-47255d54e6c2 to disappear Nov 22 22:45:25.452: INFO: Pod pod-ce889458-a033-4cb3-87a5-47255d54e6c2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:45:25.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7345" for this suite. 
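[Editor's note] The EmptyDir (non-root,0666,tmpfs) variant above differs from the default-medium case in two ways: the volume is memory-backed (`medium: Memory`, i.e. tmpfs) and the container runs as a non-root user checking 0666 permissions. A hedged sketch, with an assumed UID and stand-in image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-tmpfs-demo  # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000               # non-root (UID is an assumption)
  containers:
  - name: test-container
    image: busybox:1.29           # assumed stand-in image
    command: ["sh", "-c", "touch /mnt/test && chmod 0666 /mnt/test && stat -c '%a' /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory              # tmpfs-backed; usage counts against pod memory
```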
Nov 22 22:45:31.513: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:45:31.590: INFO: namespace emptydir-7345 deletion completed in 6.134655087s • [SLOW TEST:10.325 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:45:31.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Nov 22 22:45:31.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-5334' Nov 22 22:45:34.300: INFO: stderr: "" Nov 22 22:45:34.301: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Nov 22 22:45:39.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-5334 -o json' Nov 22 22:45:39.442: INFO: stderr: "" Nov 22 22:45:39.442: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-11-22T22:45:34Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-5334\",\n \"resourceVersion\": \"10980062\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-5334/pods/e2e-test-nginx-pod\",\n \"uid\": \"890d42cb-2d64-4f4c-a5ec-894ac702275b\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-pfg8c\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n 
\"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-pfg8c\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-pfg8c\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-22T22:45:34Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-22T22:45:37Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-22T22:45:37Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-11-22T22:45:34Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://3dd3ba1d5379c2e5c865e980a6d49708cf1f54d6a9084eb70da7fe356d5f1965\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-11-22T22:45:36Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.18.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.225\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-11-22T22:45:34Z\"\n }\n}\n" STEP: replace the image in the pod Nov 22 22:45:39.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - 
--namespace=kubectl-5334' Nov 22 22:45:39.800: INFO: stderr: "" Nov 22 22:45:39.800: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Nov 22 22:45:39.803: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5334' Nov 22 22:45:45.665: INFO: stderr: "" Nov 22 22:45:45.665: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:45:45.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5334" for this suite. 
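[Editor's note] In the Kubectl replace test above, the framework fetches the running pod with `kubectl get pod -o json` (the large JSON blob earlier in the log), swaps the container image from `nginx:1.14-alpine` to `busybox:1.29` in that document, and pipes the full result to `kubectl replace -f -`. Piping the complete retrieved object back matters because pod spec updates may only change a small set of fields (such as container images), so `replace` needs the rest of the spec intact. A trimmed sketch of just the portion that changes; the command is an assumption (busybox exits immediately without one), and in practice the full JSON is round-tripped:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: e2e-test-nginx-pod
  namespace: kubectl-5334
  labels:
    run: e2e-test-nginx-pod
spec:
  containers:
  - name: e2e-test-nginx-pod
    image: docker.io/library/busybox:1.29   # the only field the test changes
    command: ["sleep", "3600"]              # assumption for illustration only
```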
Nov 22 22:45:51.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:45:51.774: INFO: namespace kubectl-5334 deletion completed in 6.094340263s • [SLOW TEST:20.184 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:45:51.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating 
the pod Nov 22 22:45:56.439: INFO: Successfully updated pod "pod-update-f95d4de9-1800-400b-b268-5138ea1c3753" STEP: verifying the updated pod is in kubernetes Nov 22 22:45:56.485: INFO: Pod update OK [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:45:56.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-39" for this suite. Nov 22 22:46:18.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:46:18.578: INFO: namespace pods-39 deletion completed in 22.088833099s • [SLOW TEST:26.804 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:46:18.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] 
[NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-6ce67b23-cc1c-4631-80e6-570f081887b1 STEP: Creating a pod to test consume configMaps Nov 22 22:46:18.652: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-9e7d186c-301e-4835-b246-cff1ea80add3" in namespace "projected-3926" to be "success or failure" Nov 22 22:46:18.697: INFO: Pod "pod-projected-configmaps-9e7d186c-301e-4835-b246-cff1ea80add3": Phase="Pending", Reason="", readiness=false. Elapsed: 44.783642ms Nov 22 22:46:20.714: INFO: Pod "pod-projected-configmaps-9e7d186c-301e-4835-b246-cff1ea80add3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061631328s Nov 22 22:46:22.723: INFO: Pod "pod-projected-configmaps-9e7d186c-301e-4835-b246-cff1ea80add3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070535201s STEP: Saw pod success Nov 22 22:46:22.723: INFO: Pod "pod-projected-configmaps-9e7d186c-301e-4835-b246-cff1ea80add3" satisfied condition "success or failure" Nov 22 22:46:22.726: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-9e7d186c-301e-4835-b246-cff1ea80add3 container projected-configmap-volume-test: STEP: delete the pod Nov 22 22:46:22.766: INFO: Waiting for pod pod-projected-configmaps-9e7d186c-301e-4835-b246-cff1ea80add3 to disappear Nov 22 22:46:22.782: INFO: Pod pod-projected-configmaps-9e7d186c-301e-4835-b246-cff1ea80add3 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:46:22.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3926" for this suite. 
Nov 22 22:46:28.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:46:28.874: INFO: namespace projected-3926 deletion completed in 6.088283317s • [SLOW TEST:10.296 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:46:28.874: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-4853 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 22 22:46:28.923: INFO: Waiting up to 10m0s for all (but 0) nodes to be 
schedulable STEP: Creating test pods Nov 22 22:46:53.026: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.227:8080/dial?request=hostName&protocol=http&host=10.244.2.226&port=8080&tries=1'] Namespace:pod-network-test-4853 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 22 22:46:53.026: INFO: >>> kubeConfig: /root/.kube/config I1122 22:46:53.063382 6 log.go:172] (0xc000d9efd0) (0xc001964280) Create stream I1122 22:46:53.063412 6 log.go:172] (0xc000d9efd0) (0xc001964280) Stream added, broadcasting: 1 I1122 22:46:53.065449 6 log.go:172] (0xc000d9efd0) Reply frame received for 1 I1122 22:46:53.065491 6 log.go:172] (0xc000d9efd0) (0xc000101400) Create stream I1122 22:46:53.065510 6 log.go:172] (0xc000d9efd0) (0xc000101400) Stream added, broadcasting: 3 I1122 22:46:53.066808 6 log.go:172] (0xc000d9efd0) Reply frame received for 3 I1122 22:46:53.066847 6 log.go:172] (0xc000d9efd0) (0xc001964320) Create stream I1122 22:46:53.066854 6 log.go:172] (0xc000d9efd0) (0xc001964320) Stream added, broadcasting: 5 I1122 22:46:53.068065 6 log.go:172] (0xc000d9efd0) Reply frame received for 5 I1122 22:46:53.202428 6 log.go:172] (0xc000d9efd0) Data frame received for 3 I1122 22:46:53.202473 6 log.go:172] (0xc000101400) (3) Data frame handling I1122 22:46:53.202504 6 log.go:172] (0xc000101400) (3) Data frame sent I1122 22:46:53.203364 6 log.go:172] (0xc000d9efd0) Data frame received for 3 I1122 22:46:53.203381 6 log.go:172] (0xc000101400) (3) Data frame handling I1122 22:46:53.203416 6 log.go:172] (0xc000d9efd0) Data frame received for 5 I1122 22:46:53.203496 6 log.go:172] (0xc001964320) (5) Data frame handling I1122 22:46:53.205437 6 log.go:172] (0xc000d9efd0) Data frame received for 1 I1122 22:46:53.205457 6 log.go:172] (0xc001964280) (1) Data frame handling I1122 22:46:53.205466 6 log.go:172] (0xc001964280) (1) Data frame sent I1122 22:46:53.205474 6 log.go:172] (0xc000d9efd0) 
(0xc001964280) Stream removed, broadcasting: 1 I1122 22:46:53.205482 6 log.go:172] (0xc000d9efd0) Go away received I1122 22:46:53.205662 6 log.go:172] (0xc000d9efd0) (0xc001964280) Stream removed, broadcasting: 1 I1122 22:46:53.205699 6 log.go:172] (0xc000d9efd0) (0xc000101400) Stream removed, broadcasting: 3 I1122 22:46:53.205724 6 log.go:172] (0xc000d9efd0) (0xc001964320) Stream removed, broadcasting: 5 Nov 22 22:46:53.205: INFO: Waiting for endpoints: map[] Nov 22 22:46:53.214: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.227:8080/dial?request=hostName&protocol=http&host=10.244.1.191&port=8080&tries=1'] Namespace:pod-network-test-4853 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 22 22:46:53.214: INFO: >>> kubeConfig: /root/.kube/config I1122 22:46:53.249454 6 log.go:172] (0xc000c26580) (0xc0017c9cc0) Create stream I1122 22:46:53.249487 6 log.go:172] (0xc000c26580) (0xc0017c9cc0) Stream added, broadcasting: 1 I1122 22:46:53.251243 6 log.go:172] (0xc000c26580) Reply frame received for 1 I1122 22:46:53.251276 6 log.go:172] (0xc000c26580) (0xc0019643c0) Create stream I1122 22:46:53.251283 6 log.go:172] (0xc000c26580) (0xc0019643c0) Stream added, broadcasting: 3 I1122 22:46:53.252404 6 log.go:172] (0xc000c26580) Reply frame received for 3 I1122 22:46:53.252445 6 log.go:172] (0xc000c26580) (0xc0017c9e00) Create stream I1122 22:46:53.252460 6 log.go:172] (0xc000c26580) (0xc0017c9e00) Stream added, broadcasting: 5 I1122 22:46:53.253812 6 log.go:172] (0xc000c26580) Reply frame received for 5 I1122 22:46:53.315657 6 log.go:172] (0xc000c26580) Data frame received for 3 I1122 22:46:53.315681 6 log.go:172] (0xc0019643c0) (3) Data frame handling I1122 22:46:53.315689 6 log.go:172] (0xc0019643c0) (3) Data frame sent I1122 22:46:53.316748 6 log.go:172] (0xc000c26580) Data frame received for 5 I1122 22:46:53.316770 6 log.go:172] (0xc0017c9e00) (5) Data frame handling 
I1122 22:46:53.317005 6 log.go:172] (0xc000c26580) Data frame received for 3 I1122 22:46:53.317023 6 log.go:172] (0xc0019643c0) (3) Data frame handling I1122 22:46:53.318485 6 log.go:172] (0xc000c26580) Data frame received for 1 I1122 22:46:53.318510 6 log.go:172] (0xc0017c9cc0) (1) Data frame handling I1122 22:46:53.318529 6 log.go:172] (0xc0017c9cc0) (1) Data frame sent I1122 22:46:53.318543 6 log.go:172] (0xc000c26580) (0xc0017c9cc0) Stream removed, broadcasting: 1 I1122 22:46:53.318591 6 log.go:172] (0xc000c26580) Go away received I1122 22:46:53.318649 6 log.go:172] (0xc000c26580) (0xc0017c9cc0) Stream removed, broadcasting: 1 I1122 22:46:53.318669 6 log.go:172] (0xc000c26580) (0xc0019643c0) Stream removed, broadcasting: 3 I1122 22:46:53.318697 6 log.go:172] (0xc000c26580) (0xc0017c9e00) Stream removed, broadcasting: 5 Nov 22 22:46:53.318: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:46:53.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4853" for this suite. 
Nov 22 22:47:17.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:47:17.417: INFO: namespace pod-network-test-4853 deletion completed in 24.094333454s • [SLOW TEST:48.543 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:47:17.418: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-3f107e13-2f11-4646-9885-9107a5f50232 STEP: Creating a 
pod to test consume configMaps Nov 22 22:47:17.486: INFO: Waiting up to 5m0s for pod "pod-configmaps-58ce930b-0a82-4311-9350-6a632755cd83" in namespace "configmap-8182" to be "success or failure" Nov 22 22:47:17.491: INFO: Pod "pod-configmaps-58ce930b-0a82-4311-9350-6a632755cd83": Phase="Pending", Reason="", readiness=false. Elapsed: 5.316183ms Nov 22 22:47:19.495: INFO: Pod "pod-configmaps-58ce930b-0a82-4311-9350-6a632755cd83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009241842s Nov 22 22:47:21.499: INFO: Pod "pod-configmaps-58ce930b-0a82-4311-9350-6a632755cd83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012416977s STEP: Saw pod success Nov 22 22:47:21.499: INFO: Pod "pod-configmaps-58ce930b-0a82-4311-9350-6a632755cd83" satisfied condition "success or failure" Nov 22 22:47:21.501: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-58ce930b-0a82-4311-9350-6a632755cd83 container configmap-volume-test: STEP: delete the pod Nov 22 22:47:21.516: INFO: Waiting for pod pod-configmaps-58ce930b-0a82-4311-9350-6a632755cd83 to disappear Nov 22 22:47:21.521: INFO: Pod pod-configmaps-58ce930b-0a82-4311-9350-6a632755cd83 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:47:21.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8182" for this suite. 
Nov 22 22:47:27.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:47:27.610: INFO: namespace configmap-8182 deletion completed in 6.08626236s • [SLOW TEST:10.192 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:47:27.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 22:47:27.723: INFO: Create a RollingUpdate DaemonSet Nov 22 22:47:27.727: INFO: Check that daemon pods launch on every node of the cluster Nov 22 22:47:27.731: INFO: DaemonSet pods 
can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:27.736: INFO: Number of nodes with available pods: 0 Nov 22 22:47:27.736: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:47:28.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:28.743: INFO: Number of nodes with available pods: 0 Nov 22 22:47:28.743: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:47:29.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:29.744: INFO: Number of nodes with available pods: 0 Nov 22 22:47:29.744: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:47:30.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:30.745: INFO: Number of nodes with available pods: 0 Nov 22 22:47:30.745: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:47:31.741: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:31.744: INFO: Number of nodes with available pods: 0 Nov 22 22:47:31.744: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:47:32.742: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:32.745: INFO: Number of nodes with available pods: 2 Nov 22 22:47:32.746: INFO: Number of running nodes: 2, number of available 
pods: 2 Nov 22 22:47:32.746: INFO: Update the DaemonSet to trigger a rollout Nov 22 22:47:32.753: INFO: Updating DaemonSet daemon-set Nov 22 22:47:45.776: INFO: Roll back the DaemonSet before rollout is complete Nov 22 22:47:45.782: INFO: Updating DaemonSet daemon-set Nov 22 22:47:45.782: INFO: Make sure DaemonSet rollback is complete Nov 22 22:47:45.796: INFO: Wrong image for pod: daemon-set-k9725. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Nov 22 22:47:45.796: INFO: Pod daemon-set-k9725 is not available Nov 22 22:47:45.827: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:46.832: INFO: Wrong image for pod: daemon-set-k9725. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Nov 22 22:47:46.832: INFO: Pod daemon-set-k9725 is not available Nov 22 22:47:46.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:47.831: INFO: Wrong image for pod: daemon-set-k9725. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Nov 22 22:47:47.831: INFO: Pod daemon-set-k9725 is not available Nov 22 22:47:47.834: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:47:48.832: INFO: Pod daemon-set-pkmw4 is not available Nov 22 22:47:48.836: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2530, will wait for the garbage collector to delete the pods Nov 22 22:47:48.902: INFO: Deleting DaemonSet.extensions daemon-set took: 6.357985ms Nov 22 22:47:49.203: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.272859ms Nov 22 22:47:52.506: INFO: Number of nodes with available pods: 0 Nov 22 22:47:52.506: INFO: Number of running nodes: 0, number of available pods: 0 Nov 22 22:47:52.509: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2530/daemonsets","resourceVersion":"10980557"},"items":null} Nov 22 22:47:52.512: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2530/pods","resourceVersion":"10980557"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:47:52.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2530" for this suite. 
Nov 22 22:47:58.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:47:58.639: INFO: namespace daemonsets-2530 deletion completed in 6.114391133s • [SLOW TEST:31.029 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:47:58.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the 
image docker.io/library/nginx:1.14-alpine Nov 22 22:47:58.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-3660' Nov 22 22:47:58.808: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Nov 22 22:47:58.808: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Nov 22 22:48:02.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-3660' Nov 22 22:48:02.946: INFO: stderr: "" Nov 22 22:48:02.946: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:48:02.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3660" for this suite. 
Nov 22 22:48:24.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:48:25.083: INFO: namespace kubectl-3660 deletion completed in 22.129030658s • [SLOW TEST:26.443 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:48:25.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 22:48:25.157: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Nov 22 22:48:25.182: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:25.206: INFO: Number of nodes with available pods: 0 Nov 22 22:48:25.206: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:48:26.265: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:26.268: INFO: Number of nodes with available pods: 0 Nov 22 22:48:26.269: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:48:27.211: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:27.214: INFO: Number of nodes with available pods: 0 Nov 22 22:48:27.214: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:48:28.211: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:28.215: INFO: Number of nodes with available pods: 1 Nov 22 22:48:28.215: INFO: Node iruya-worker is running more than one daemon pod Nov 22 22:48:29.211: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:29.215: INFO: Number of nodes with available pods: 2 Nov 22 22:48:29.215: INFO: Number of running nodes: 2, number of available pods: 2 STEP: 
Update daemon pods image. STEP: Check that daemon pods images are updated. Nov 22 22:48:29.241: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:29.241: INFO: Wrong image for pod: daemon-set-xppjm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:29.264: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:30.267: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:30.267: INFO: Wrong image for pod: daemon-set-xppjm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:30.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:31.267: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:31.267: INFO: Wrong image for pod: daemon-set-xppjm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:31.273: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:32.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:32.268: INFO: Wrong image for pod: daemon-set-xppjm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Nov 22 22:48:32.268: INFO: Pod daemon-set-xppjm is not available Nov 22 22:48:32.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:33.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:33.268: INFO: Wrong image for pod: daemon-set-xppjm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:33.268: INFO: Pod daemon-set-xppjm is not available Nov 22 22:48:33.273: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:34.267: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:34.267: INFO: Wrong image for pod: daemon-set-xppjm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:34.267: INFO: Pod daemon-set-xppjm is not available Nov 22 22:48:34.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:35.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:35.268: INFO: Wrong image for pod: daemon-set-xppjm. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Nov 22 22:48:35.268: INFO: Pod daemon-set-xppjm is not available Nov 22 22:48:35.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:36.269: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:36.269: INFO: Pod daemon-set-wfdb9 is not available Nov 22 22:48:36.274: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:37.269: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:37.269: INFO: Pod daemon-set-wfdb9 is not available Nov 22 22:48:37.273: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:38.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:38.268: INFO: Pod daemon-set-wfdb9 is not available Nov 22 22:48:38.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:39.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:39.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:40.268: INFO: Wrong image for pod: daemon-set-cjqt9. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:40.268: INFO: Pod daemon-set-cjqt9 is not available Nov 22 22:48:40.271: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:41.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:41.268: INFO: Pod daemon-set-cjqt9 is not available Nov 22 22:48:41.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:42.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:42.268: INFO: Pod daemon-set-cjqt9 is not available Nov 22 22:48:42.273: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:43.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:43.268: INFO: Pod daemon-set-cjqt9 is not available Nov 22 22:48:43.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:44.268: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Nov 22 22:48:44.268: INFO: Pod daemon-set-cjqt9 is not available Nov 22 22:48:44.272: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:45.269: INFO: Wrong image for pod: daemon-set-cjqt9. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Nov 22 22:48:45.269: INFO: Pod daemon-set-cjqt9 is not available Nov 22 22:48:45.273: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:46.268: INFO: Pod daemon-set-p2lht is not available Nov 22 22:48:46.271: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
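The log above shows the RollingUpdate in action: one pod at a time goes unavailable and is replaced with the new image (daemon-set-xppjm is replaced by daemon-set-wfdb9, then daemon-set-cjqt9 by daemon-set-p2lht) before the next node is touched. A minimal sketch of that replacement loop, assuming the default maxUnavailable of 1 (this is illustrative code, not the e2e framework's implementation):

```python
# Simulate a DaemonSet RollingUpdate with maxUnavailable=1: pods whose
# image does not match the target are replaced one batch at a time,
# mirroring the one-pod-at-a-time progression in the log above.
def rolling_update(pods, new_image, max_unavailable=1):
    """Update each stale pod's image in batches; return the update order."""
    order = []
    pending = [name for name, image in pods.items() if image != new_image]
    while pending:
        batch, pending = pending[:max_unavailable], pending[max_unavailable:]
        for name in batch:
            pods[name] = new_image  # old pod deleted, replacement created
            order.append(name)
    return order

pods = {
    "daemon-set-xppjm": "docker.io/library/nginx:1.14-alpine",
    "daemon-set-cjqt9": "docker.io/library/nginx:1.14-alpine",
}
order = rolling_update(pods, "gcr.io/kubernetes-e2e-test-images/redis:1.0")
```

With maxUnavailable=1, the two worker pods are updated sequentially, which is why the test observes "Pod ... is not available" for only one pod at a time.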
Nov 22 22:48:46.275: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:46.278: INFO: Number of nodes with available pods: 1 Nov 22 22:48:46.278: INFO: Node iruya-worker2 is running more than one daemon pod Nov 22 22:48:47.283: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:47.287: INFO: Number of nodes with available pods: 1 Nov 22 22:48:47.287: INFO: Node iruya-worker2 is running more than one daemon pod Nov 22 22:48:48.498: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:48.713: INFO: Number of nodes with available pods: 1 Nov 22 22:48:48.713: INFO: Node iruya-worker2 is running more than one daemon pod Nov 22 22:48:49.282: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:49.284: INFO: Number of nodes with available pods: 1 Nov 22 22:48:49.284: INFO: Node iruya-worker2 is running more than one daemon pod Nov 22 22:48:50.300: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Nov 22 22:48:50.304: INFO: Number of nodes with available pods: 2 Nov 22 22:48:50.304: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace 
daemonsets-5473, will wait for the garbage collector to delete the pods Nov 22 22:48:50.376: INFO: Deleting DaemonSet.extensions daemon-set took: 5.847796ms Nov 22 22:48:50.676: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.232869ms Nov 22 22:48:55.401: INFO: Number of nodes with available pods: 0 Nov 22 22:48:55.401: INFO: Number of running nodes: 0, number of available pods: 0 Nov 22 22:48:55.403: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5473/daemonsets","resourceVersion":"10980827"},"items":null} Nov 22 22:48:55.406: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5473/pods","resourceVersion":"10980827"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:48:55.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5473" for this suite. 
Nov 22 22:49:01.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:49:01.533: INFO: namespace daemonsets-5473 deletion completed in 6.114915277s • [SLOW TEST:36.450 seconds] [sig-apps] Daemon set [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:49:01.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:49:05.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6005" for this suite. Nov 22 22:49:11.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:49:11.707: INFO: namespace kubelet-test-6005 deletion completed in 6.088040816s • [SLOW TEST:10.174 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:49:11.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion 
STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Nov 22 22:49:11.772: INFO: Waiting up to 5m0s for pod "var-expansion-28c3a133-dee7-4ea2-ad3d-d5c075db9c61" in namespace "var-expansion-196" to be "success or failure" Nov 22 22:49:11.775: INFO: Pod "var-expansion-28c3a133-dee7-4ea2-ad3d-d5c075db9c61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.248343ms Nov 22 22:49:13.779: INFO: Pod "var-expansion-28c3a133-dee7-4ea2-ad3d-d5c075db9c61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006947547s Nov 22 22:49:15.783: INFO: Pod "var-expansion-28c3a133-dee7-4ea2-ad3d-d5c075db9c61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010808225s STEP: Saw pod success Nov 22 22:49:15.783: INFO: Pod "var-expansion-28c3a133-dee7-4ea2-ad3d-d5c075db9c61" satisfied condition "success or failure" Nov 22 22:49:15.786: INFO: Trying to get logs from node iruya-worker pod var-expansion-28c3a133-dee7-4ea2-ad3d-d5c075db9c61 container dapi-container: STEP: delete the pod Nov 22 22:49:15.852: INFO: Waiting for pod var-expansion-28c3a133-dee7-4ea2-ad3d-d5c075db9c61 to disappear Nov 22 22:49:15.871: INFO: Pod var-expansion-28c3a133-dee7-4ea2-ad3d-d5c075db9c61 no longer exists [AfterEach] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:49:15.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-196" for this suite. 
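The Variable Expansion test above substitutes `$(VAR)` references in a container's command using the container's environment. A simplified sketch of that expansion rule (an assumption-laden stand-in for Kubernetes' expansion logic, not the real implementation): known references are replaced, unknown references pass through unchanged, and `$$` escapes a literal `$`.

```python
import re

# Simplified $(VAR) command expansion: replace $(NAME) with the value
# from the container's environment, leave unresolved references as-is,
# and treat $$ as an escape producing a literal $.
def expand_command(arg, env):
    def repl(match):
        name = match.group(1)
        return env.get(name, match.group(0))  # unknown refs pass through
    # Handle escaping first: each "$$" contributes one literal "$",
    # and whatever follows it is not treated as a reference.
    parts = arg.split("$$")
    return "$".join(
        re.sub(r"\$\(([A-Za-z_][A-Za-z0-9_]*)\)", repl, p) for p in parts
    )

cmd = expand_command("echo $(MESSAGE) $$(LITERAL)", {"MESSAGE": "hi"})
# -> "echo hi $(LITERAL)"
```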
Nov 22 22:49:21.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:49:21.969: INFO: namespace var-expansion-196 deletion completed in 6.094256221s • [SLOW TEST:10.261 seconds] [k8s.io] Variable Expansion /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:49:21.970: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Nov 22 22:49:22.023: INFO: Waiting up to 5m0s for pod "client-containers-4f9b76e0-fe85-494f-9ce6-7535cbefa2cd" in namespace "containers-6031" to be "success or failure" Nov 22 22:49:22.034: INFO: Pod 
"client-containers-4f9b76e0-fe85-494f-9ce6-7535cbefa2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.764825ms Nov 22 22:49:24.038: INFO: Pod "client-containers-4f9b76e0-fe85-494f-9ce6-7535cbefa2cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01462189s Nov 22 22:49:26.041: INFO: Pod "client-containers-4f9b76e0-fe85-494f-9ce6-7535cbefa2cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017904722s STEP: Saw pod success Nov 22 22:49:26.041: INFO: Pod "client-containers-4f9b76e0-fe85-494f-9ce6-7535cbefa2cd" satisfied condition "success or failure" Nov 22 22:49:26.043: INFO: Trying to get logs from node iruya-worker pod client-containers-4f9b76e0-fe85-494f-9ce6-7535cbefa2cd container test-container: STEP: delete the pod Nov 22 22:49:26.069: INFO: Waiting for pod client-containers-4f9b76e0-fe85-494f-9ce6-7535cbefa2cd to disappear Nov 22 22:49:26.073: INFO: Pod client-containers-4f9b76e0-fe85-494f-9ce6-7535cbefa2cd no longer exists [AfterEach] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:49:26.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6031" for this suite. 
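The Docker Containers test above overrides the image's default arguments (docker CMD) by setting `args` in the pod spec. The interaction between the pod's `command`/`args` and the image's ENTRYPOINT/CMD can be sketched as follows (an illustrative helper, not Kubernetes source):

```python
# Effective argv a container runs, per the Kubernetes override rules:
# - command overrides the image ENTRYPOINT (and drops the image CMD)
# - args overrides the image CMD
def effective_command(entrypoint, cmd, command=None, args=None):
    if command is not None:
        return command + (args if args is not None else [])
    return entrypoint + (args if args is not None else cmd)

# Image defines ENTRYPOINT ["/ep"] and CMD ["default"]; pod sets only args.
argv = effective_command(["/ep"], ["default"], args=["override"])
# -> ["/ep", "override"]
```

Setting only `args`, as this test does, keeps the image's entrypoint but discards its default arguments.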
Nov 22 22:49:32.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:49:32.181: INFO: namespace containers-6031 deletion completed in 6.104547s • [SLOW TEST:10.211 seconds] [k8s.io] Docker Containers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:49:32.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-20f387c3-d198-4cbc-a8f1-e766172c5744 STEP: Creating a pod to test consume configMaps Nov 22 22:49:32.244: INFO: Waiting up to 5m0s for pod 
"pod-configmaps-924e90f0-319d-4bf6-890c-74c4fd1438ac" in namespace "configmap-2797" to be "success or failure" Nov 22 22:49:32.247: INFO: Pod "pod-configmaps-924e90f0-319d-4bf6-890c-74c4fd1438ac": Phase="Pending", Reason="", readiness=false. Elapsed: 3.202163ms Nov 22 22:49:34.250: INFO: Pod "pod-configmaps-924e90f0-319d-4bf6-890c-74c4fd1438ac": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006581222s Nov 22 22:49:36.254: INFO: Pod "pod-configmaps-924e90f0-319d-4bf6-890c-74c4fd1438ac": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010376043s STEP: Saw pod success Nov 22 22:49:36.254: INFO: Pod "pod-configmaps-924e90f0-319d-4bf6-890c-74c4fd1438ac" satisfied condition "success or failure" Nov 22 22:49:36.258: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-924e90f0-319d-4bf6-890c-74c4fd1438ac container configmap-volume-test: STEP: delete the pod Nov 22 22:49:36.308: INFO: Waiting for pod pod-configmaps-924e90f0-319d-4bf6-890c-74c4fd1438ac to disappear Nov 22 22:49:36.319: INFO: Pod pod-configmaps-924e90f0-319d-4bf6-890c-74c4fd1438ac no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:49:36.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2797" for this suite. 
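The `defaultMode` exercised by the ConfigMap test above is an octal file permission, but the JSON API representation carries it as a decimal integer (JSON has no octal literals), so a mode of 0644 appears on the wire as 420. A one-line illustrative helper:

```python
# defaultMode on configMap/secret volumes: octal mode string to the
# decimal integer form the JSON API actually transmits.
def default_mode_json(octal_str):
    return int(octal_str, 8)

mode = default_mode_json("0644")  # -> 420
```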
Nov 22 22:49:42.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:49:42.408: INFO: namespace configmap-2797 deletion completed in 6.084887223s • [SLOW TEST:10.226 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:49:42.408: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 22:50:04.504: INFO: Container started at 2020-11-22 22:49:44 +0000 
UTC, pod became ready at 2020-11-22 22:50:03 +0000 UTC [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:50:04.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5628" for this suite. Nov 22 22:50:26.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:50:26.597: INFO: namespace container-probe-5628 deletion completed in 22.088952336s • [SLOW TEST:44.189 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:50:26.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-c4dw STEP: Creating a pod to test atomic-volume-subpath Nov 22 22:50:26.873: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-c4dw" in namespace "subpath-3984" to be "success or failure" Nov 22 22:50:26.895: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Pending", Reason="", readiness=false. Elapsed: 22.377291ms Nov 22 22:50:28.900: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026569529s Nov 22 22:50:30.904: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 4.030836754s Nov 22 22:50:32.909: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 6.03549934s Nov 22 22:50:34.913: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 8.039730622s Nov 22 22:50:36.917: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 10.043924311s Nov 22 22:50:38.921: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 12.048259844s Nov 22 22:50:40.926: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 14.052767836s Nov 22 22:50:42.930: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 16.057181975s Nov 22 22:50:44.935: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. 
Elapsed: 18.061528952s Nov 22 22:50:46.939: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 20.065981138s Nov 22 22:50:48.943: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 22.070339243s Nov 22 22:50:50.948: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Running", Reason="", readiness=true. Elapsed: 24.075097109s Nov 22 22:50:52.952: INFO: Pod "pod-subpath-test-configmap-c4dw": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.079243411s STEP: Saw pod success Nov 22 22:50:52.952: INFO: Pod "pod-subpath-test-configmap-c4dw" satisfied condition "success or failure" Nov 22 22:50:52.955: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-c4dw container test-container-subpath-configmap-c4dw: STEP: delete the pod Nov 22 22:50:53.009: INFO: Waiting for pod pod-subpath-test-configmap-c4dw to disappear Nov 22 22:50:53.011: INFO: Pod pod-subpath-test-configmap-c4dw no longer exists STEP: Deleting pod pod-subpath-test-configmap-c4dw Nov 22 22:50:53.011: INFO: Deleting pod "pod-subpath-test-configmap-c4dw" in namespace "subpath-3984" [AfterEach] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:50:53.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3984" for this suite. 
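The repeated "Waiting up to 5m0s for pod ... to be 'success or failure'" lines throughout this log come from a poll loop: the framework re-checks the pod's phase on an interval until it reaches a terminal phase or the timeout expires. A simplified stand-in for that pattern (assumptions: the 2-second interval matches the cadence visible in the log; this is not the framework's code):

```python
# Poll a phase-returning callable until the pod reaches a terminal
# phase or the timeout is exceeded. Elapsed time is simulated rather
# than slept, so the sketch is testable without a cluster.
def wait_for_pod_phase(get_phase, timeout_s=300, interval_s=2):
    elapsed = 0
    while elapsed <= timeout_s:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        elapsed += interval_s  # stand-in for time.sleep(interval_s)
    raise TimeoutError("pod did not reach a terminal phase")

phases = iter(["Pending", "Pending", "Running", "Running", "Succeeded"])
result = wait_for_pod_phase(lambda: next(phases))
# -> ("Succeeded", 8)
```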
Nov 22 22:50:59.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:50:59.139: INFO: namespace subpath-3984 deletion completed in 6.121939176s • [SLOW TEST:32.542 seconds] [sig-storage] Subpath /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:50:59.141: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name 
secret-test-2af82778-9970-4c77-a29d-e67506d9c373 STEP: Creating a pod to test consume secrets Nov 22 22:50:59.234: INFO: Waiting up to 5m0s for pod "pod-secrets-6e7a942c-ec17-4e5f-82aa-933ae8487199" in namespace "secrets-2793" to be "success or failure" Nov 22 22:50:59.238: INFO: Pod "pod-secrets-6e7a942c-ec17-4e5f-82aa-933ae8487199": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043381ms Nov 22 22:51:01.243: INFO: Pod "pod-secrets-6e7a942c-ec17-4e5f-82aa-933ae8487199": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008326652s Nov 22 22:51:03.247: INFO: Pod "pod-secrets-6e7a942c-ec17-4e5f-82aa-933ae8487199": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012406432s STEP: Saw pod success Nov 22 22:51:03.247: INFO: Pod "pod-secrets-6e7a942c-ec17-4e5f-82aa-933ae8487199" satisfied condition "success or failure" Nov 22 22:51:03.250: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-6e7a942c-ec17-4e5f-82aa-933ae8487199 container secret-volume-test: STEP: delete the pod Nov 22 22:51:03.315: INFO: Waiting for pod pod-secrets-6e7a942c-ec17-4e5f-82aa-933ae8487199 to disappear Nov 22 22:51:03.323: INFO: Pod pod-secrets-6e7a942c-ec17-4e5f-82aa-933ae8487199 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:51:03.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2793" for this suite. 
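Several tests in this log combine a volume-level `defaultMode` with per-item modes (the projected-secret test earlier sets an item mode explicitly). The documented semantics: a per-item `mode` overrides `defaultMode` for that key only. A small illustrative helper (not API code):

```python
# Resolve the effective file mode per key in a secret/configMap volume:
# a per-item mode wins over the volume-level defaultMode for that key.
def effective_modes(keys, default_mode, item_modes=None):
    item_modes = item_modes or {}
    return {k: item_modes.get(k, default_mode) for k in keys}

modes = effective_modes(["username", "password"], 0o400, {"password": 0o440})
```

Here `username` falls back to the default 0400 while `password` gets its explicit 0440.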
Nov 22 22:51:09.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:51:09.430: INFO: namespace secrets-2793 deletion completed in 6.100813539s • [SLOW TEST:10.289 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:51:09.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:52:09.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7663" for this suite. Nov 22 22:52:31.591: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:52:31.672: INFO: namespace container-probe-7663 deletion completed in 22.093505325s • [SLOW TEST:82.241 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:52:31.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-66217232-acee-4e0c-9bde-89c1f9f5e005 STEP: Creating a pod to test consume configMaps Nov 22 22:52:31.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-20befee4-1cae-4af0-85ec-880767c323d7" in namespace "configmap-7144" to be "success or failure" Nov 22 22:52:31.765: INFO: Pod "pod-configmaps-20befee4-1cae-4af0-85ec-880767c323d7": Phase="Pending", Reason="", readiness=false. Elapsed: 15.394452ms Nov 22 22:52:33.950: INFO: Pod "pod-configmaps-20befee4-1cae-4af0-85ec-880767c323d7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200305475s Nov 22 22:52:35.954: INFO: Pod "pod-configmaps-20befee4-1cae-4af0-85ec-880767c323d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204424765s STEP: Saw pod success Nov 22 22:52:35.954: INFO: Pod "pod-configmaps-20befee4-1cae-4af0-85ec-880767c323d7" satisfied condition "success or failure" Nov 22 22:52:35.957: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-20befee4-1cae-4af0-85ec-880767c323d7 container configmap-volume-test: STEP: delete the pod Nov 22 22:52:36.265: INFO: Waiting for pod pod-configmaps-20befee4-1cae-4af0-85ec-880767c323d7 to disappear Nov 22 22:52:36.280: INFO: Pod pod-configmaps-20befee4-1cae-4af0-85ec-880767c323d7 no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:52:36.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7144" for this suite. 
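The mapping-plus-mode variant exercised above remaps a ConfigMap key to a chosen path and sets a per-file mode. A hand-written equivalent might look like this (a hedged sketch; the names, key, path, and mode are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-item-mode-demo  # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/path/to/data-1 && ls -l /etc/config/path/to/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/config
  volumes:
  - name: configmap-volume
    configMap:
      name: my-configmap          # assumed to exist in the same namespace
      items:
      - key: data-1               # key inside the ConfigMap
        path: path/to/data-1      # file name under the mount point
        mode: 0400                # per-file mode, overrides defaultMode
```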
Nov 22 22:52:42.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:52:42.370: INFO: namespace configmap-7144 deletion completed in 6.086710233s
• [SLOW TEST:10.698 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:52:42.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Nov 22 22:52:42.410: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Nov 22 22:52:42.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6689'
Nov 22 22:52:42.699: INFO: stderr: ""
Nov 22 22:52:42.699: INFO: stdout: "service/redis-slave created\n"
Nov 22 22:52:42.699: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Nov 22 22:52:42.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6689'
Nov 22 22:52:43.029: INFO: stderr: ""
Nov 22 22:52:43.029: INFO: stdout: "service/redis-master created\n"
Nov 22 22:52:43.029: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Nov 22 22:52:43.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6689'
Nov 22 22:52:43.316: INFO: stderr: ""
Nov 22 22:52:43.316: INFO: stdout: "service/frontend created\n"
Nov 22 22:52:43.317: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Nov 22 22:52:43.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6689'
Nov 22 22:52:43.589: INFO: stderr: ""
Nov 22 22:52:43.589: INFO: stdout: "deployment.apps/frontend created\n"
Nov 22 22:52:43.589: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Nov 22 22:52:43.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6689'
Nov 22 22:52:43.935: INFO: stderr: ""
Nov 22 22:52:43.935: INFO: stdout: "deployment.apps/redis-master created\n"
Nov 22 22:52:43.935: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Nov 22 22:52:43.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6689'
Nov 22 22:52:44.252: INFO: stderr: ""
Nov 22 22:52:44.252: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Nov 22 22:52:44.252: INFO: Waiting for all frontend pods to be Running.
Nov 22 22:52:54.303: INFO: Waiting for frontend to serve content.
Nov 22 22:52:54.320: INFO: Trying to add a new entry to the guestbook.
Nov 22 22:52:54.351: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Nov 22 22:52:54.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6689'
Nov 22 22:52:54.503: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 22 22:52:54.503: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Nov 22 22:52:54.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6689'
Nov 22 22:52:54.662: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" Nov 22 22:52:54.662: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Nov 22 22:52:54.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6689' Nov 22 22:52:54.772: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 22 22:52:54.772: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Nov 22 22:52:54.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6689' Nov 22 22:52:54.905: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 22 22:52:54.905: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Nov 22 22:52:54.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6689' Nov 22 22:52:55.014: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 22 22:52:55.014: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Nov 22 22:52:55.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6689' Nov 22 22:52:55.142: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Nov 22 22:52:55.142: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:52:55.142: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6689" for this suite. Nov 22 22:53:35.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:53:35.295: INFO: namespace kubectl-6689 deletion completed in 40.098796713s • [SLOW TEST:52.924 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:53:35.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1122 22:54:05.906932 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 22 22:54:05.907: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:54:05.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-612" for this
suite. Nov 22 22:54:13.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:54:14.031: INFO: namespace gc-612 deletion completed in 8.120766649s • [SLOW TEST:38.735 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:54:14.031: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-2f6a0b50-22ca-42f3-b593-14eab2d51b98 STEP: Creating a pod to test consume configMaps Nov 22 22:54:14.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-6330c4d2-f2f2-45f3-b760-dfa4a34559df" in namespace "configmap-8222" to be "success or failure" Nov 22 
22:54:14.134: INFO: Pod "pod-configmaps-6330c4d2-f2f2-45f3-b760-dfa4a34559df": Phase="Pending", Reason="", readiness=false. Elapsed: 3.83797ms Nov 22 22:54:16.138: INFO: Pod "pod-configmaps-6330c4d2-f2f2-45f3-b760-dfa4a34559df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007936632s Nov 22 22:54:18.142: INFO: Pod "pod-configmaps-6330c4d2-f2f2-45f3-b760-dfa4a34559df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012309697s STEP: Saw pod success Nov 22 22:54:18.142: INFO: Pod "pod-configmaps-6330c4d2-f2f2-45f3-b760-dfa4a34559df" satisfied condition "success or failure" Nov 22 22:54:18.145: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-6330c4d2-f2f2-45f3-b760-dfa4a34559df container configmap-volume-test: STEP: delete the pod Nov 22 22:54:18.189: INFO: Waiting for pod pod-configmaps-6330c4d2-f2f2-45f3-b760-dfa4a34559df to disappear Nov 22 22:54:18.199: INFO: Pod pod-configmaps-6330c4d2-f2f2-45f3-b760-dfa4a34559df no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:54:18.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8222" for this suite. 
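The mappings test above differs from the earlier item-mode variant only in leaving file modes at their defaults. Assuming a ConfigMap like the following exists, a volume `items` list remaps its keys to chosen paths (an illustrative sketch; none of these names come from the run above):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: my-configmap            # illustrative name
data:
  data-1: value-1               # key that the pod below remaps
---
apiVersion: v1
kind: Pod
metadata:
  name: configmap-mapping-demo  # illustrative name
spec:
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/config
  volumes:
  - name: cm
    configMap:
      name: my-configmap
      items:
      - key: data-1
        path: path/to/data-1    # surfaces at /etc/config/path/to/data-1
```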
Nov 22 22:54:24.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:54:24.305: INFO: namespace configmap-8222 deletion completed in 6.102702758s • [SLOW TEST:10.275 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:54:24.305: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 22 22:54:24.382: INFO: Waiting up to 5m0s for pod 
"downwardapi-volume-a45bd063-0858-4c59-95dd-2af51a24ef1f" in namespace "downward-api-781" to be "success or failure" Nov 22 22:54:24.405: INFO: Pod "downwardapi-volume-a45bd063-0858-4c59-95dd-2af51a24ef1f": Phase="Pending", Reason="", readiness=false. Elapsed: 23.622529ms Nov 22 22:54:26.408: INFO: Pod "downwardapi-volume-a45bd063-0858-4c59-95dd-2af51a24ef1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026850768s Nov 22 22:54:28.413: INFO: Pod "downwardapi-volume-a45bd063-0858-4c59-95dd-2af51a24ef1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031125734s STEP: Saw pod success Nov 22 22:54:28.413: INFO: Pod "downwardapi-volume-a45bd063-0858-4c59-95dd-2af51a24ef1f" satisfied condition "success or failure" Nov 22 22:54:28.416: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a45bd063-0858-4c59-95dd-2af51a24ef1f container client-container: STEP: delete the pod Nov 22 22:54:28.438: INFO: Waiting for pod downwardapi-volume-a45bd063-0858-4c59-95dd-2af51a24ef1f to disappear Nov 22 22:54:28.443: INFO: Pod downwardapi-volume-a45bd063-0858-4c59-95dd-2af51a24ef1f no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:54:28.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-781" for this suite. 
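The DefaultMode test above projects pod metadata through a downward API volume and checks the file permissions. A hand-written equivalent might look like this (a hedged sketch; the pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo   # illustrative name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo && cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400         # the property this conformance test asserts on
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```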
Nov 22 22:54:34.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:54:34.533: INFO: namespace downward-api-781 deletion completed in 6.086223506s • [SLOW TEST:10.227 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:54:34.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-2509 STEP: creating a selector STEP: Creating the service pods in kubernetes Nov 22 22:54:34.648: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Nov 22 22:54:58.784: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
--max-time 15 --connect-timeout 1 http://10.244.2.242:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2509 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 22 22:54:58.784: INFO: >>> kubeConfig: /root/.kube/config I1122 22:54:58.817958 6 log.go:172] (0xc00186ef20) (0xc002751540) Create stream I1122 22:54:58.817985 6 log.go:172] (0xc00186ef20) (0xc002751540) Stream added, broadcasting: 1 I1122 22:54:58.819943 6 log.go:172] (0xc00186ef20) Reply frame received for 1 I1122 22:54:58.819999 6 log.go:172] (0xc00186ef20) (0xc001c84640) Create stream I1122 22:54:58.820015 6 log.go:172] (0xc00186ef20) (0xc001c84640) Stream added, broadcasting: 3 I1122 22:54:58.821284 6 log.go:172] (0xc00186ef20) Reply frame received for 3 I1122 22:54:58.821332 6 log.go:172] (0xc00186ef20) (0xc001c84780) Create stream I1122 22:54:58.821347 6 log.go:172] (0xc00186ef20) (0xc001c84780) Stream added, broadcasting: 5 I1122 22:54:58.822279 6 log.go:172] (0xc00186ef20) Reply frame received for 5 I1122 22:54:58.913155 6 log.go:172] (0xc00186ef20) Data frame received for 3 I1122 22:54:58.913186 6 log.go:172] (0xc001c84640) (3) Data frame handling I1122 22:54:58.913205 6 log.go:172] (0xc001c84640) (3) Data frame sent I1122 22:54:58.913333 6 log.go:172] (0xc00186ef20) Data frame received for 3 I1122 22:54:58.913381 6 log.go:172] (0xc001c84640) (3) Data frame handling I1122 22:54:58.913402 6 log.go:172] (0xc00186ef20) Data frame received for 5 I1122 22:54:58.913419 6 log.go:172] (0xc001c84780) (5) Data frame handling I1122 22:54:58.915103 6 log.go:172] (0xc00186ef20) Data frame received for 1 I1122 22:54:58.915118 6 log.go:172] (0xc002751540) (1) Data frame handling I1122 22:54:58.915138 6 log.go:172] (0xc002751540) (1) Data frame sent I1122 22:54:58.915153 6 log.go:172] (0xc00186ef20) (0xc002751540) Stream removed, broadcasting: 1 I1122 22:54:58.915228 6 log.go:172] (0xc00186ef20) (0xc002751540) Stream removed, 
broadcasting: 1 I1122 22:54:58.915237 6 log.go:172] (0xc00186ef20) (0xc001c84640) Stream removed, broadcasting: 3 I1122 22:54:58.915335 6 log.go:172] (0xc00186ef20) Go away received I1122 22:54:58.915389 6 log.go:172] (0xc00186ef20) (0xc001c84780) Stream removed, broadcasting: 5 Nov 22 22:54:58.915: INFO: Found all expected endpoints: [netserver-0] Nov 22 22:54:58.918: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.207:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-2509 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 22 22:54:58.918: INFO: >>> kubeConfig: /root/.kube/config I1122 22:54:58.947046 6 log.go:172] (0xc002038790) (0xc001c84aa0) Create stream I1122 22:54:58.947080 6 log.go:172] (0xc002038790) (0xc001c84aa0) Stream added, broadcasting: 1 I1122 22:54:58.949108 6 log.go:172] (0xc002038790) Reply frame received for 1 I1122 22:54:58.949156 6 log.go:172] (0xc002038790) (0xc00209a460) Create stream I1122 22:54:58.949171 6 log.go:172] (0xc002038790) (0xc00209a460) Stream added, broadcasting: 3 I1122 22:54:58.950428 6 log.go:172] (0xc002038790) Reply frame received for 3 I1122 22:54:58.950491 6 log.go:172] (0xc002038790) (0xc00209a500) Create stream I1122 22:54:58.950516 6 log.go:172] (0xc002038790) (0xc00209a500) Stream added, broadcasting: 5 I1122 22:54:58.951711 6 log.go:172] (0xc002038790) Reply frame received for 5 I1122 22:54:59.032387 6 log.go:172] (0xc002038790) Data frame received for 5 I1122 22:54:59.032427 6 log.go:172] (0xc00209a500) (5) Data frame handling I1122 22:54:59.032454 6 log.go:172] (0xc002038790) Data frame received for 3 I1122 22:54:59.032469 6 log.go:172] (0xc00209a460) (3) Data frame handling I1122 22:54:59.032489 6 log.go:172] (0xc00209a460) (3) Data frame sent I1122 22:54:59.032500 6 log.go:172] (0xc002038790) Data frame received for 3 I1122 22:54:59.032509 6 log.go:172] 
(0xc00209a460) (3) Data frame handling I1122 22:54:59.034188 6 log.go:172] (0xc002038790) Data frame received for 1 I1122 22:54:59.034221 6 log.go:172] (0xc001c84aa0) (1) Data frame handling I1122 22:54:59.034258 6 log.go:172] (0xc001c84aa0) (1) Data frame sent I1122 22:54:59.034279 6 log.go:172] (0xc002038790) (0xc001c84aa0) Stream removed, broadcasting: 1 I1122 22:54:59.034392 6 log.go:172] (0xc002038790) (0xc001c84aa0) Stream removed, broadcasting: 1 I1122 22:54:59.034415 6 log.go:172] (0xc002038790) (0xc00209a460) Stream removed, broadcasting: 3 I1122 22:54:59.034445 6 log.go:172] (0xc002038790) (0xc00209a500) Stream removed, broadcasting: 5 Nov 22 22:54:59.034: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 I1122 22:54:59.034519 6 log.go:172] (0xc002038790) Go away received Nov 22 22:54:59.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-2509" for this suite. 
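Each endpoint check above execs `curl ... | grep -v '^\s*$'` inside a host test pod to fetch a netserver's hostname, filtering blank lines out of the response before comparison. The filtering half of that pipeline can be tried locally without a cluster (the sample response body is made up; `\s` relies on GNU grep):

```shell
# Simulated /hostName response with trailing blank/whitespace-only lines,
# passed through the same filter the e2e exec command uses.
printf 'netserver-0\n\n  \n' | grep -v '^\s*$'
```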
Nov 22 22:55:21.061: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:55:21.193: INFO: namespace pod-network-test-2509 deletion completed in 22.153080521s • [SLOW TEST:46.660 seconds] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:55:21.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Nov 22 22:55:25.787: INFO: Successfully updated pod "annotationupdatea73c9984-8493-43e0-8bb8-5af3e4d807ae" [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:55:29.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9251" for this suite. Nov 22 22:55:51.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:55:51.937: INFO: namespace downward-api-9251 deletion completed in 22.092748937s • [SLOW TEST:30.743 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:55:51.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Nov 22 22:55:52.006: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:55:52.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5511" for this suite. 
Nov 22 22:55:58.154: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:55:58.228: INFO: namespace kubectl-5511 deletion completed in 6.1161082s • [SLOW TEST:6.290 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:55:58.229: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Nov 22 22:55:58.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2946' Nov 22 22:56:01.368: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Nov 22 22:56:01.368: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Nov 22 22:56:01.378: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Nov 22 22:56:01.441: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Nov 22 22:56:01.457: INFO: scanned /root for discovery docs: Nov 22 22:56:01.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2946' Nov 22 22:56:17.287: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Nov 22 22:56:17.287: INFO: stdout: "Created e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2\nScaling up e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling 
e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Nov 22 22:56:17.287: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:17.396: INFO: stderr: "" Nov 22 22:56:17.397: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:56:22.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:22.496: INFO: stderr: "" Nov 22 22:56:22.496: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:56:27.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l
run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:27.600: INFO: stderr: "" Nov 22 22:56:27.600: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:56:32.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:32.703: INFO: stderr: "" Nov 22 22:56:32.703: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:56:37.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:37.799: INFO: stderr: "" Nov 22 22:56:37.799: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:56:42.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:42.909: INFO: stderr: "" Nov 22 22:56:42.909: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:56:47.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:48.009: INFO: stderr: "" Nov 22 22:56:48.009: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 
actual=2 Nov 22 22:56:53.009: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:53.125: INFO: stderr: "" Nov 22 22:56:53.125: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:56:58.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:56:58.226: INFO: stderr: "" Nov 22 22:56:58.226: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:57:03.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:57:03.326: INFO: stderr: "" Nov 22 22:57:03.326: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:57:08.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:57:08.440: INFO: stderr: "" Nov 22 22:57:08.440: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:57:13.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:57:13.543: INFO: stderr: 
"" Nov 22 22:57:13.543: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:57:18.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:57:18.643: INFO: stderr: "" Nov 22 22:57:18.643: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:57:23.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:57:23.751: INFO: stderr: "" Nov 22 22:57:23.751: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 e2e-test-nginx-rc-q84dr " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Nov 22 22:57:28.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:57:28.849: INFO: stderr: "" Nov 22 22:57:28.849: INFO: stdout: "e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 " Nov 22 22:57:28.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2946' Nov 22 22:57:28.943: INFO: stderr: "" Nov 22 22:57:28.943: INFO: stdout: "true" Nov 22 22:57:28.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2946' Nov 22 22:57:29.032: INFO: stderr: "" Nov 22 22:57:29.033: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Nov 22 22:57:29.033: INFO: e2e-test-nginx-rc-014edf44e632e515cbea20bcd5efeea2-twdt7 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Nov 22 22:57:29.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2946' Nov 22 22:57:29.154: INFO: stderr: "" Nov 22 22:57:29.154: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:57:29.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2946" for this suite. 
Nov 22 22:57:35.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:57:35.331: INFO: namespace kubectl-2946 deletion completed in 6.091393638s • [SLOW TEST:97.102 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:57:35.331: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Nov 22 22:57:35.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9929' Nov 22 22:57:35.509: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Nov 22 22:57:35.509: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Nov 22 22:57:35.520: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-p7c7n] Nov 22 22:57:35.520: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-p7c7n" in namespace "kubectl-9929" to be "running and ready" Nov 22 22:57:35.550: INFO: Pod "e2e-test-nginx-rc-p7c7n": Phase="Pending", Reason="", readiness=false. Elapsed: 29.250997ms Nov 22 22:57:37.553: INFO: Pod "e2e-test-nginx-rc-p7c7n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032907027s Nov 22 22:57:39.557: INFO: Pod "e2e-test-nginx-rc-p7c7n": Phase="Running", Reason="", readiness=true. Elapsed: 4.036672084s Nov 22 22:57:39.557: INFO: Pod "e2e-test-nginx-rc-p7c7n" satisfied condition "running and ready" Nov 22 22:57:39.557: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-p7c7n] Nov 22 22:57:39.557: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9929' Nov 22 22:57:39.683: INFO: stderr: "" Nov 22 22:57:39.683: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Nov 22 22:57:39.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9929' Nov 22 22:57:39.783: INFO: stderr: "" Nov 22 22:57:39.783: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:57:39.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9929" for this suite. 
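The deprecated `kubectl run --generator=run/v1` invocation above creates a bare ReplicationController. Roughly the object it produces, reconstructed as a sketch from the flags in the log (field values are inferred, not the exact server-side object):

```yaml
# Approximate ReplicationController created by
# `kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1`
apiVersion: v1
kind: ReplicationController
metadata:
  name: e2e-test-nginx-rc
spec:
  replicas: 1
  selector:
    run: e2e-test-nginx-rc     # kubectl derives the label from the name
  template:
    metadata:
      labels:
        run: e2e-test-nginx-rc
    spec:
      containers:
      - name: e2e-test-nginx-rc
        image: docker.io/library/nginx:1.14-alpine
```

`kubectl logs rc/e2e-test-nginx-rc` then resolves one pod behind the controller's selector, which is why the test can fetch logs without naming the pod.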
Nov 22 22:57:46.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:57:46.730: INFO: namespace kubectl-9929 deletion completed in 6.944097052s • [SLOW TEST:11.399 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:57:46.730: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Creating a pod to test downward API volume plugin Nov 22 22:57:46.846: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7160bb6f-3603-44ce-9cbd-23fe2ac50bff" in namespace "projected-6145" to be "success or failure" Nov 22 22:57:46.875: INFO: Pod "downwardapi-volume-7160bb6f-3603-44ce-9cbd-23fe2ac50bff": Phase="Pending", Reason="", readiness=false. Elapsed: 28.924593ms Nov 22 22:57:48.910: INFO: Pod "downwardapi-volume-7160bb6f-3603-44ce-9cbd-23fe2ac50bff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.063981524s Nov 22 22:57:50.945: INFO: Pod "downwardapi-volume-7160bb6f-3603-44ce-9cbd-23fe2ac50bff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099835744s STEP: Saw pod success Nov 22 22:57:50.946: INFO: Pod "downwardapi-volume-7160bb6f-3603-44ce-9cbd-23fe2ac50bff" satisfied condition "success or failure" Nov 22 22:57:50.948: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-7160bb6f-3603-44ce-9cbd-23fe2ac50bff container client-container: STEP: delete the pod Nov 22 22:57:50.983: INFO: Waiting for pod downwardapi-volume-7160bb6f-3603-44ce-9cbd-23fe2ac50bff to disappear Nov 22 22:57:50.998: INFO: Pod downwardapi-volume-7160bb6f-3603-44ce-9cbd-23fe2ac50bff no longer exists [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:57:50.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6145" for this suite. 
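The test above mounts a projected volume whose downwardAPI source exposes the container's CPU limit as a file. A minimal sketch of that pod shape (names, image, and the limit value are illustrative assumptions):

```yaml
# Sketch: a projected downwardAPI volume exposing limits.cpu as a file
# the container can read at /etc/podinfo/cpu_limit.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"            # illustrative limit
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
              divisor: 1m      # report the limit in millicores
```

The test's "success or failure" wait then just checks that the pod ran to completion and that the file contents match the declared limit.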
Nov 22 22:57:57.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:57:57.089: INFO: namespace projected-6145 deletion completed in 6.087309476s • [SLOW TEST:10.359 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:57:57.089: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Nov 22 22:57:57.131: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 22:57:57.166: INFO: Waiting for terminating namespaces to be deleted... 
Nov 22 22:57:57.168: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Nov 22 22:57:57.173: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 22 22:57:57.174: INFO: Container kindnet-cni ready: true, restart count 0 Nov 22 22:57:57.174: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 22 22:57:57.174: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 22:57:57.174: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Nov 22 22:57:57.178: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 22 22:57:57.178: INFO: Container kindnet-cni ready: true, restart count 0 Nov 22 22:57:57.178: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 22 22:57:57.178: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1649f6988d701ad8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:57:58.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9469" for this suite. 
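The FailedScheduling event above comes from a pod whose nodeSelector no node can satisfy. A minimal sketch of such a pod (the label key/value are made up for illustration; only the pod name is taken from the event):

```yaml
# Sketch: a nodeSelector no node carries, so the scheduler reports
# "0/3 nodes are available: 3 node(s) didn't match node selector."
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    example.invalid/nonexistent: "true"   # hypothetical label
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

The pod stays Pending and the scheduler keeps emitting the warning event until either the selector is removed or a node gains the label.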
Nov 22 22:58:04.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:58:04.308: INFO: namespace sched-pred-9469 deletion completed in 6.096465156s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.219 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:58:04.309: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Nov 22 22:58:04.885: INFO: created pod pod-service-account-defaultsa Nov 22 
22:58:04.885: INFO: pod pod-service-account-defaultsa service account token volume mount: true Nov 22 22:58:04.915: INFO: created pod pod-service-account-mountsa Nov 22 22:58:04.915: INFO: pod pod-service-account-mountsa service account token volume mount: true Nov 22 22:58:04.946: INFO: created pod pod-service-account-nomountsa Nov 22 22:58:04.946: INFO: pod pod-service-account-nomountsa service account token volume mount: false Nov 22 22:58:04.970: INFO: created pod pod-service-account-defaultsa-mountspec Nov 22 22:58:04.970: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Nov 22 22:58:05.005: INFO: created pod pod-service-account-mountsa-mountspec Nov 22 22:58:05.005: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Nov 22 22:58:06.952: INFO: created pod pod-service-account-nomountsa-mountspec Nov 22 22:58:06.952: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Nov 22 22:58:06.989: INFO: created pod pod-service-account-defaultsa-nomountspec Nov 22 22:58:06.989: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Nov 22 22:58:07.204: INFO: created pod pod-service-account-mountsa-nomountspec Nov 22 22:58:07.204: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Nov 22 22:58:07.288: INFO: created pod pod-service-account-nomountsa-nomountspec Nov 22 22:58:07.288: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:58:07.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1201" for this suite. 
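The matrix of pods above exercises `automountServiceAccountToken`, which can be set on the ServiceAccount or on the pod spec; the pod-level setting wins when both are present. A minimal sketch of the pod-level opt-out (image and command are illustrative assumptions):

```yaml
# Sketch of the opt-out case: no token volume is mounted into the pod,
# matching the "service account token volume mount: false" lines above.
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountspec
spec:
  serviceAccountName: default          # hypothetical; the test creates its own SAs
  automountServiceAccountToken: false  # pod-level setting overrides the SA
  containers:
  - name: token-test
    image: busybox
    command: ["sleep", "3600"]
```

With this set, `/var/run/secrets/kubernetes.io/serviceaccount` is absent inside the container, which is what the test asserts for the "nomount" variants.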
Nov 22 22:58:35.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:58:35.782: INFO: namespace svcaccounts-1201 deletion completed in 28.16572122s • [SLOW TEST:31.473 seconds] [sig-auth] ServiceAccounts /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:58:35.782: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 22:58:35.888: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"80a1430e-abeb-4da3-a3ee-241cc1b18ff7", Controller:(*bool)(0xc000b365da), BlockOwnerDeletion:(*bool)(0xc000b365db)}} Nov 22 22:58:35.893: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", 
UID:"b98d3f45-e93b-4274-9326-c006cad1eb32", Controller:(*bool)(0xc00052198a), BlockOwnerDeletion:(*bool)(0xc00052198b)}} Nov 22 22:58:35.898: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"4ab5163c-0570-4991-8218-4b2314d3b1ff", Controller:(*bool)(0xc000b3676a), BlockOwnerDeletion:(*bool)(0xc000b3676b)}} [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:58:40.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7820" for this suite. Nov 22 22:58:47.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:58:47.086: INFO: namespace gc-7820 deletion completed in 6.136342275s • [SLOW TEST:11.303 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:58:47.086: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename 
kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 22:58:47.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2322" for this suite.
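The pod this test creates runs a command that is guaranteed to fail, so the container crash-loops forever; the point of the test is that such a pod can still be deleted cleanly. Its shape is roughly the following (illustrative manifest; the name, image tag, and exact spec are assumptions, not the framework's generated object):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: bin-false            # hypothetical name
spec:
  restartPolicy: Always      # keeps the kubelet restarting the failing container
  containers:
  - name: bin-false
    image: busybox
    command: ["/bin/false"]  # always exits non-zero, so the pod never becomes Ready
```

Deleting it (e.g. `kubectl delete pod bin-false`) must still succeed even though the container never runs successfully.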
Nov 22 22:58:55.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 22:58:55.317: INFO: namespace kubelet-test-2322 deletion completed in 8.083904157s • [SLOW TEST:8.231 seconds] [k8s.io] Kubelet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 22:58:55.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-8957 STEP: creating a selector STEP: Creating the 
service pods in kubernetes Nov 22 22:58:55.378: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Nov 22 22:59:19.503: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.252:8080/dial?request=hostName&protocol=udp&host=10.244.2.251&port=8081&tries=1'] Namespace:pod-network-test-8957 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 22 22:59:19.503: INFO: >>> kubeConfig: /root/.kube/config I1122 22:59:19.531289 6 log.go:172] (0xc002106f20) (0xc0025d8780) Create stream I1122 22:59:19.531338 6 log.go:172] (0xc002106f20) (0xc0025d8780) Stream added, broadcasting: 1 I1122 22:59:19.533864 6 log.go:172] (0xc002106f20) Reply frame received for 1 I1122 22:59:19.533898 6 log.go:172] (0xc002106f20) (0xc00211a000) Create stream I1122 22:59:19.533910 6 log.go:172] (0xc002106f20) (0xc00211a000) Stream added, broadcasting: 3 I1122 22:59:19.534835 6 log.go:172] (0xc002106f20) Reply frame received for 3 I1122 22:59:19.534862 6 log.go:172] (0xc002106f20) (0xc0025d8820) Create stream I1122 22:59:19.534874 6 log.go:172] (0xc002106f20) (0xc0025d8820) Stream added, broadcasting: 5 I1122 22:59:19.535655 6 log.go:172] (0xc002106f20) Reply frame received for 5 I1122 22:59:19.615471 6 log.go:172] (0xc002106f20) Data frame received for 3 I1122 22:59:19.615504 6 log.go:172] (0xc00211a000) (3) Data frame handling I1122 22:59:19.615530 6 log.go:172] (0xc00211a000) (3) Data frame sent I1122 22:59:19.616271 6 log.go:172] (0xc002106f20) Data frame received for 5 I1122 22:59:19.616311 6 log.go:172] (0xc0025d8820) (5) Data frame handling I1122 22:59:19.616335 6 log.go:172] (0xc002106f20) Data frame received for 3 I1122 22:59:19.616346 6 log.go:172] (0xc00211a000) (3) Data frame handling I1122 22:59:19.618196 6 log.go:172] (0xc002106f20) Data frame received for 1 I1122 22:59:19.618227 6 log.go:172] (0xc0025d8780) (1) Data frame handling I1122 22:59:19.618245 
6 log.go:172] (0xc0025d8780) (1) Data frame sent I1122 22:59:19.618262 6 log.go:172] (0xc002106f20) (0xc0025d8780) Stream removed, broadcasting: 1 I1122 22:59:19.618280 6 log.go:172] (0xc002106f20) Go away received I1122 22:59:19.618387 6 log.go:172] (0xc002106f20) (0xc0025d8780) Stream removed, broadcasting: 1 I1122 22:59:19.618431 6 log.go:172] (0xc002106f20) (0xc00211a000) Stream removed, broadcasting: 3 I1122 22:59:19.618454 6 log.go:172] (0xc002106f20) (0xc0025d8820) Stream removed, broadcasting: 5 Nov 22 22:59:19.618: INFO: Waiting for endpoints: map[] Nov 22 22:59:19.621: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.252:8080/dial?request=hostName&protocol=udp&host=10.244.1.218&port=8081&tries=1'] Namespace:pod-network-test-8957 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Nov 22 22:59:19.622: INFO: >>> kubeConfig: /root/.kube/config I1122 22:59:19.657674 6 log.go:172] (0xc000b41ad0) (0xc001dc8140) Create stream I1122 22:59:19.657702 6 log.go:172] (0xc000b41ad0) (0xc001dc8140) Stream added, broadcasting: 1 I1122 22:59:19.666687 6 log.go:172] (0xc000b41ad0) Reply frame received for 1 I1122 22:59:19.666767 6 log.go:172] (0xc000b41ad0) (0xc00211a140) Create stream I1122 22:59:19.666791 6 log.go:172] (0xc000b41ad0) (0xc00211a140) Stream added, broadcasting: 3 I1122 22:59:19.668146 6 log.go:172] (0xc000b41ad0) Reply frame received for 3 I1122 22:59:19.668190 6 log.go:172] (0xc000b41ad0) (0xc0025d88c0) Create stream I1122 22:59:19.668215 6 log.go:172] (0xc000b41ad0) (0xc0025d88c0) Stream added, broadcasting: 5 I1122 22:59:19.669610 6 log.go:172] (0xc000b41ad0) Reply frame received for 5 I1122 22:59:19.751831 6 log.go:172] (0xc000b41ad0) Data frame received for 3 I1122 22:59:19.751855 6 log.go:172] (0xc00211a140) (3) Data frame handling I1122 22:59:19.751863 6 log.go:172] (0xc00211a140) (3) Data frame sent I1122 22:59:19.752053 6 log.go:172] (0xc000b41ad0) 
Data frame received for 3 I1122 22:59:19.752064 6 log.go:172] (0xc00211a140) (3) Data frame handling I1122 22:59:19.752271 6 log.go:172] (0xc000b41ad0) Data frame received for 5 I1122 22:59:19.752297 6 log.go:172] (0xc0025d88c0) (5) Data frame handling I1122 22:59:19.753994 6 log.go:172] (0xc000b41ad0) Data frame received for 1 I1122 22:59:19.754007 6 log.go:172] (0xc001dc8140) (1) Data frame handling I1122 22:59:19.754015 6 log.go:172] (0xc001dc8140) (1) Data frame sent I1122 22:59:19.754022 6 log.go:172] (0xc000b41ad0) (0xc001dc8140) Stream removed, broadcasting: 1 I1122 22:59:19.754099 6 log.go:172] (0xc000b41ad0) Go away received I1122 22:59:19.754160 6 log.go:172] (0xc000b41ad0) (0xc001dc8140) Stream removed, broadcasting: 1 I1122 22:59:19.754218 6 log.go:172] (0xc000b41ad0) (0xc00211a140) Stream removed, broadcasting: 3 I1122 22:59:19.754238 6 log.go:172] (0xc000b41ad0) (0xc0025d88c0) Stream removed, broadcasting: 5 Nov 22 22:59:19.754: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:59:19.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-8957" for this suite. 
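Each exec stream above curls the `netexec` "dial" endpoint on the host test pod, asking it to relay a `hostName` request over UDP to a target pod and report the replies it gets back. The URL the test builds (shown here with this run's IPs) can be sketched as (illustrative helper, not the test's code):

```python
# Build the netexec "dial" probe URL the test curls from the host test
# container: the pod at `proxy` is asked to send `request` to host:port
# over the given protocol and report the answers it received.
from urllib.parse import urlencode

def dial_url(proxy, host, port, protocol="udp", tries=1, request="hostName"):
    query = urlencode({"request": request, "protocol": protocol,
                       "host": host, "port": port, "tries": tries})
    return "http://%s:8080/dial?%s" % (proxy, query)
```

An empty `Waiting for endpoints: map[]` line afterwards means every expected pod answered, so nothing is left to wait for.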
Nov 22 22:59:43.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 22:59:43.876: INFO: namespace pod-network-test-8957 deletion completed in 24.112535488s
• [SLOW TEST:48.559 seconds]
[sig-network] Networking
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
Granular Checks: Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 22:59:43.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC Nov 22 22:59:43.954: INFO: namespace kubectl-384 Nov 22 22:59:43.954: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-384' Nov 22 22:59:44.206: INFO: stderr: "" Nov 22 22:59:44.206: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. Nov 22 22:59:45.211: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:59:45.211: INFO: Found 0 / 1 Nov 22 22:59:46.211: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:59:46.211: INFO: Found 0 / 1 Nov 22 22:59:47.211: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:59:47.211: INFO: Found 0 / 1 Nov 22 22:59:48.211: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:59:48.211: INFO: Found 1 / 1 Nov 22 22:59:48.211: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Nov 22 22:59:48.214: INFO: Selector matched 1 pods for map[app:redis] Nov 22 22:59:48.214: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Nov 22 22:59:48.214: INFO: wait on redis-master startup in kubectl-384 Nov 22 22:59:48.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-6jsw5 redis-master --namespace=kubectl-384' Nov 22 22:59:48.322: INFO: stderr: "" Nov 22 22:59:48.323: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 22 Nov 22:59:46.714 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Nov 22:59:46.714 # Server started, Redis version 3.2.12\n1:M 22 Nov 22:59:46.714 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Nov 22:59:46.714 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Nov 22 22:59:48.323: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-384' Nov 22 22:59:48.451: INFO: stderr: "" Nov 22 22:59:48.451: INFO: stdout: "service/rm2 exposed\n" Nov 22 22:59:48.456: INFO: Service rm2 in namespace kubectl-384 found. STEP: exposing service Nov 22 22:59:50.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-384' Nov 22 22:59:50.598: INFO: stderr: "" Nov 22 22:59:50.598: INFO: stdout: "service/rm3 exposed\n" Nov 22 22:59:50.641: INFO: Service rm3 in namespace kubectl-384 found. 
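The first `kubectl expose` above is roughly equivalent to creating this Service by hand (an illustrative manifest; the selector is an assumption inferred from the RC's pod labels, which the log's pod filter suggests are `app: redis`):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-384
spec:
  selector:
    app: redis          # copied from the replication controller's selector
  ports:
  - port: 1234          # --port: the port the Service itself serves on
    targetPort: 6379    # --target-port: the redis container's port
```

The second expose then wraps `rm2` in a new Service `rm3` with the same selector, mapping port 2345 to the same target port 6379.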
[AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 22:59:52.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-384" for this suite. Nov 22 23:00:14.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:00:14.787: INFO: namespace kubectl-384 deletion completed in 22.116333642s • [SLOW TEST:30.910 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:00:14.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-8631/secret-test-fc0cce14-5c0b-4468-9b72-8321a330231f STEP: Creating a pod to test consume secrets Nov 22 23:00:14.878: INFO: Waiting up to 5m0s for pod "pod-configmaps-11a0a348-0096-4d5b-be80-190010494915" in namespace "secrets-8631" to be "success or failure" Nov 22 23:00:14.893: INFO: Pod "pod-configmaps-11a0a348-0096-4d5b-be80-190010494915": Phase="Pending", Reason="", readiness=false. Elapsed: 14.913539ms Nov 22 23:00:16.897: INFO: Pod "pod-configmaps-11a0a348-0096-4d5b-be80-190010494915": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018998812s Nov 22 23:00:18.902: INFO: Pod "pod-configmaps-11a0a348-0096-4d5b-be80-190010494915": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023244438s STEP: Saw pod success Nov 22 23:00:18.902: INFO: Pod "pod-configmaps-11a0a348-0096-4d5b-be80-190010494915" satisfied condition "success or failure" Nov 22 23:00:18.904: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-11a0a348-0096-4d5b-be80-190010494915 container env-test: STEP: delete the pod Nov 22 23:00:18.966: INFO: Waiting for pod pod-configmaps-11a0a348-0096-4d5b-be80-190010494915 to disappear Nov 22 23:00:18.972: INFO: Pod pod-configmaps-11a0a348-0096-4d5b-be80-190010494915 no longer exists [AfterEach] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:00:18.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8631" for this suite. 
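The pod above consumes the secret through the environment rather than a volume; a pod of that shape looks roughly like this (illustrative manifest: the secret name is the one created above, but the pod name, variable name, and key are assumptions, since the real ones are generated or not shown in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: env-secret-demo          # hypothetical; the real name is generated
spec:
  restartPolicy: Never
  containers:
  - name: env-test               # container name from the log above
    image: busybox
    command: ["sh", "-c", "env"] # print the environment, then exit
    env:
    - name: SECRET_DATA          # hypothetical variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-fc0cce14-5c0b-4468-9b72-8321a330231f
          key: data-1            # hypothetical key name
```

The test then asserts "success or failure" by waiting for the pod to reach `Succeeded` and checking the container's logs for the expected value.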
Nov 22 23:00:24.994: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:00:25.071: INFO: namespace secrets-8631 deletion completed in 6.094688626s • [SLOW TEST:10.283 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:00:25.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the 
container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Nov 22 23:00:29.336: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:00:29.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4452" for this suite. Nov 22 23:00:35.389: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:00:35.463: INFO: namespace container-runtime-4452 deletion completed in 6.109752495s • [SLOW TEST:10.391 seconds] [k8s.io] Container Runtime /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
[BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:00:35.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Nov 22 23:00:35.519: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:00:45.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4956" for this suite. 
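The watch-based STEPs above reduce to an ordering requirement on the pod's watch events: creation (`ADDED`) must be observed, then at least one update (`MODIFIED`, carrying the graceful-termination notice), then removal (`DELETED`). A toy checker for that ordering (illustrative, not the framework's code):

```python
# Verify a pod's watch-event stream shows create -> terminate -> delete,
# in that order. `events` is a list of dicts with a "type" key, matching
# the shape of Kubernetes watch API events.
def lifecycle_observed(events):
    types = [e["type"] for e in events]
    if "ADDED" not in types or "DELETED" not in types:
        return False
    added, deleted = types.index("ADDED"), types.index("DELETED")
    # the termination notice must arrive between creation and deletion
    return added < deleted and "MODIFIED" in types[added + 1:deleted]
```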
Nov 22 23:00:51.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:00:51.737: INFO: namespace pods-4956 deletion completed in 6.081267001s
• [SLOW TEST:16.272 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:00:51.737: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-4d391933-92a5-42ed-8c65-760f3b250a50
[AfterEach] [sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:00:51.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace
"secrets-4515" for this suite. Nov 22 23:00:57.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:00:57.946: INFO: namespace secrets-4515 deletion completed in 6.084344011s • [SLOW TEST:6.210 seconds] [sig-api-machinery] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:00:57.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 23:00:57.994: INFO: Creating deployment "test-recreate-deployment" Nov 22 23:00:58.004: INFO: Waiting deployment "test-recreate-deployment" to be 
updated to revision 1 Nov 22 23:00:58.028: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Nov 22 23:01:00.035: INFO: Waiting deployment "test-recreate-deployment" to complete Nov 22 23:01:00.038: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741682858, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741682858, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741682858, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741682858, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Nov 22 23:01:02.042: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Nov 22 23:01:02.051: INFO: Updating deployment test-recreate-deployment Nov 22 23:01:02.051: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Nov 22 23:01:02.417: INFO: Deployment "test-recreate-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-910,SelfLink:/apis/apps/v1/namespaces/deployment-910/deployments/test-recreate-deployment,UID:1961af16-7bd7-4a95-8063-2b98d76bdb97,ResourceVersion:10983453,Generation:2,CreationTimestamp:2020-11-22 23:00:57 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-11-22 23:01:02 +0000 UTC 2020-11-22 23:01:02 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-11-22 23:01:02 +0000 UTC 2020-11-22 23:00:58 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Nov 22 23:01:02.423: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-910,SelfLink:/apis/apps/v1/namespaces/deployment-910/replicasets/test-recreate-deployment-5c8c9cc69d,UID:8fb84e2f-f57e-4652-9eca-c1a0f31f4979,ResourceVersion:10983450,Generation:1,CreationTimestamp:2020-11-22 23:01:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1961af16-7bd7-4a95-8063-2b98d76bdb97 0xc000b58707 0xc000b58708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Nov 22 23:01:02.423: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Nov 22 23:01:02.423: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-910,SelfLink:/apis/apps/v1/namespaces/deployment-910/replicasets/test-recreate-deployment-6df85df6b9,UID:12c3a103-d553-4357-bec3-640b291d23a9,ResourceVersion:10983441,Generation:2,CreationTimestamp:2020-11-22 23:00:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 1961af16-7bd7-4a95-8063-2b98d76bdb97 0xc000b587e7 0xc000b587e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Nov 22 23:01:02.426: INFO: Pod "test-recreate-deployment-5c8c9cc69d-rvsqn" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-rvsqn,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-910,SelfLink:/api/v1/namespaces/deployment-910/pods/test-recreate-deployment-5c8c9cc69d-rvsqn,UID:ebfe2864-0a5f-467d-aa9d-8b4b7c5cd533,ResourceVersion:10983454,Generation:0,CreationTimestamp:2020-11-22 23:01:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 8fb84e2f-f57e-4652-9eca-c1a0f31f4979 0xc000b59cd7 0xc000b59cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bjsw7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bjsw7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bjsw7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000b59d70} {node.kubernetes.io/unreachable Exists NoExecute 0xc000b59da0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:02 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:02 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:02 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-22 23:01:02 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:01:02.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-910" for this suite. 
Nov 22 23:01:10.444: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:01:10.558: INFO: namespace deployment-910 deletion completed in 8.128133564s • [SLOW TEST:12.612 seconds] [sig-apps] Deployment /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:01:10.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-c4eeaf41-5cc1-4b69-bf52-a520e8b96c06 STEP: Creating a pod to test consume configMaps Nov 22 23:01:10.655: INFO: Waiting up to 5m0s for pod "pod-configmaps-e9ae208e-130b-4bdf-b062-f64cedffebab" in namespace "configmap-7987" to be "success or failure" Nov 22 23:01:10.662: INFO: Pod 
"pod-configmaps-e9ae208e-130b-4bdf-b062-f64cedffebab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.972462ms Nov 22 23:01:12.674: INFO: Pod "pod-configmaps-e9ae208e-130b-4bdf-b062-f64cedffebab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019018292s Nov 22 23:01:14.678: INFO: Pod "pod-configmaps-e9ae208e-130b-4bdf-b062-f64cedffebab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023049581s STEP: Saw pod success Nov 22 23:01:14.678: INFO: Pod "pod-configmaps-e9ae208e-130b-4bdf-b062-f64cedffebab" satisfied condition "success or failure" Nov 22 23:01:14.682: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-e9ae208e-130b-4bdf-b062-f64cedffebab container configmap-volume-test: STEP: delete the pod Nov 22 23:01:14.705: INFO: Waiting for pod pod-configmaps-e9ae208e-130b-4bdf-b062-f64cedffebab to disappear Nov 22 23:01:14.722: INFO: Pod pod-configmaps-e9ae208e-130b-4bdf-b062-f64cedffebab no longer exists [AfterEach] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:01:14.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7987" for this suite. 
Nov 22 23:01:20.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:01:20.812: INFO: namespace configmap-7987 deletion completed in 6.085984946s • [SLOW TEST:10.253 seconds] [sig-storage] ConfigMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:01:20.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Nov 22 23:01:20.906: INFO: Waiting up to 5m0s for pod "pod-cd02a6be-3727-44f0-8a79-a8438fa71cb9" in namespace "emptydir-2482" to be "success or failure" Nov 22 23:01:20.914: INFO: Pod "pod-cd02a6be-3727-44f0-8a79-a8438fa71cb9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.470925ms Nov 22 23:01:22.926: INFO: Pod "pod-cd02a6be-3727-44f0-8a79-a8438fa71cb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019520758s Nov 22 23:01:24.930: INFO: Pod "pod-cd02a6be-3727-44f0-8a79-a8438fa71cb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023677718s STEP: Saw pod success Nov 22 23:01:24.930: INFO: Pod "pod-cd02a6be-3727-44f0-8a79-a8438fa71cb9" satisfied condition "success or failure" Nov 22 23:01:24.933: INFO: Trying to get logs from node iruya-worker pod pod-cd02a6be-3727-44f0-8a79-a8438fa71cb9 container test-container: STEP: delete the pod Nov 22 23:01:25.008: INFO: Waiting for pod pod-cd02a6be-3727-44f0-8a79-a8438fa71cb9 to disappear Nov 22 23:01:25.052: INFO: Pod pod-cd02a6be-3727-44f0-8a79-a8438fa71cb9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:01:25.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2482" for this suite. 
Nov 22 23:01:33.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:01:33.150: INFO: namespace emptydir-2482 deletion completed in 8.095178465s • [SLOW TEST:12.338 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:01:33.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3923 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-3923 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3923 Nov 22 23:01:33.249: INFO: Found 0 stateful pods, waiting for 1 Nov 22 23:01:43.254: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Nov 22 23:01:43.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 22 23:01:43.543: INFO: stderr: "I1122 23:01:43.385047 1624 log.go:172] (0xc000a46420) (0xc0003b86e0) Create stream\nI1122 23:01:43.385113 1624 log.go:172] (0xc000a46420) (0xc0003b86e0) Stream added, broadcasting: 1\nI1122 23:01:43.390313 1624 log.go:172] (0xc000a46420) Reply frame received for 1\nI1122 23:01:43.390359 1624 log.go:172] (0xc000a46420) (0xc0002ae280) Create stream\nI1122 23:01:43.390373 1624 log.go:172] (0xc000a46420) (0xc0002ae280) Stream added, broadcasting: 3\nI1122 23:01:43.391357 1624 log.go:172] (0xc000a46420) Reply frame received for 3\nI1122 23:01:43.391408 1624 log.go:172] (0xc000a46420) (0xc0003b8000) Create stream\nI1122 23:01:43.391420 1624 log.go:172] (0xc000a46420) (0xc0003b8000) Stream added, broadcasting: 5\nI1122 23:01:43.392223 1624 log.go:172] (0xc000a46420) Reply frame received for 5\nI1122 23:01:43.477845 1624 log.go:172] (0xc000a46420) Data frame received for 5\nI1122 23:01:43.477876 1624 log.go:172] (0xc0003b8000) (5) Data frame handling\nI1122 23:01:43.477896 1624 log.go:172] (0xc0003b8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:01:43.533852 1624 log.go:172] (0xc000a46420) Data frame received for 3\nI1122 23:01:43.533902 1624 
log.go:172] (0xc0002ae280) (3) Data frame handling\nI1122 23:01:43.533946 1624 log.go:172] (0xc0002ae280) (3) Data frame sent\nI1122 23:01:43.534192 1624 log.go:172] (0xc000a46420) Data frame received for 3\nI1122 23:01:43.534236 1624 log.go:172] (0xc0002ae280) (3) Data frame handling\nI1122 23:01:43.534262 1624 log.go:172] (0xc000a46420) Data frame received for 5\nI1122 23:01:43.534274 1624 log.go:172] (0xc0003b8000) (5) Data frame handling\nI1122 23:01:43.536287 1624 log.go:172] (0xc000a46420) Data frame received for 1\nI1122 23:01:43.536343 1624 log.go:172] (0xc0003b86e0) (1) Data frame handling\nI1122 23:01:43.536375 1624 log.go:172] (0xc0003b86e0) (1) Data frame sent\nI1122 23:01:43.536401 1624 log.go:172] (0xc000a46420) (0xc0003b86e0) Stream removed, broadcasting: 1\nI1122 23:01:43.536436 1624 log.go:172] (0xc000a46420) Go away received\nI1122 23:01:43.536938 1624 log.go:172] (0xc000a46420) (0xc0003b86e0) Stream removed, broadcasting: 1\nI1122 23:01:43.536972 1624 log.go:172] (0xc000a46420) (0xc0002ae280) Stream removed, broadcasting: 3\nI1122 23:01:43.536983 1624 log.go:172] (0xc000a46420) (0xc0003b8000) Stream removed, broadcasting: 5\n" Nov 22 23:01:43.543: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 22 23:01:43.543: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 22 23:01:43.547: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Nov 22 23:01:53.552: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 22 23:01:53.552: INFO: Waiting for statefulset status.replicas updated to 0 Nov 22 23:01:53.567: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:01:53.567: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:43 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:43 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC }] Nov 22 23:01:53.567: INFO: Nov 22 23:01:53.567: INFO: StatefulSet ss has not reached scale 3, at 1 Nov 22 23:01:54.572: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.9954041s Nov 22 23:01:55.578: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.990297294s Nov 22 23:01:56.583: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.984902715s Nov 22 23:01:57.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.979799017s Nov 22 23:01:58.595: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.974709751s Nov 22 23:01:59.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.967806089s Nov 22 23:02:00.605: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.962979665s Nov 22 23:02:01.610: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957706289s Nov 22 23:02:02.615: INFO: Verifying statefulset ss doesn't scale past 3 for another 952.595289ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3923 Nov 22 23:02:03.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:02:03.815: INFO: stderr: "I1122 23:02:03.737322 1645 log.go:172] (0xc0004a8630) (0xc000734be0) Create stream\nI1122 23:02:03.737373 1645 log.go:172] (0xc0004a8630) (0xc000734be0) Stream added, broadcasting: 1\nI1122 23:02:03.742159 1645 log.go:172] (0xc0004a8630) Reply frame received for 1\nI1122 23:02:03.742194 1645 log.go:172] (0xc0004a8630) (0xc000734320) Create stream\nI1122 23:02:03.742204 
1645 log.go:172] (0xc0004a8630) (0xc000734320) Stream added, broadcasting: 3\nI1122 23:02:03.743122 1645 log.go:172] (0xc0004a8630) Reply frame received for 3\nI1122 23:02:03.743154 1645 log.go:172] (0xc0004a8630) (0xc0001a6000) Create stream\nI1122 23:02:03.743174 1645 log.go:172] (0xc0004a8630) (0xc0001a6000) Stream added, broadcasting: 5\nI1122 23:02:03.744037 1645 log.go:172] (0xc0004a8630) Reply frame received for 5\nI1122 23:02:03.806719 1645 log.go:172] (0xc0004a8630) Data frame received for 5\nI1122 23:02:03.806783 1645 log.go:172] (0xc0001a6000) (5) Data frame handling\nI1122 23:02:03.806807 1645 log.go:172] (0xc0001a6000) (5) Data frame sent\nI1122 23:02:03.806821 1645 log.go:172] (0xc0004a8630) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1122 23:02:03.806849 1645 log.go:172] (0xc0004a8630) Data frame received for 3\nI1122 23:02:03.806884 1645 log.go:172] (0xc000734320) (3) Data frame handling\nI1122 23:02:03.806911 1645 log.go:172] (0xc000734320) (3) Data frame sent\nI1122 23:02:03.806950 1645 log.go:172] (0xc0004a8630) Data frame received for 3\nI1122 23:02:03.806986 1645 log.go:172] (0xc000734320) (3) Data frame handling\nI1122 23:02:03.807015 1645 log.go:172] (0xc0001a6000) (5) Data frame handling\nI1122 23:02:03.808371 1645 log.go:172] (0xc0004a8630) Data frame received for 1\nI1122 23:02:03.808395 1645 log.go:172] (0xc000734be0) (1) Data frame handling\nI1122 23:02:03.808422 1645 log.go:172] (0xc000734be0) (1) Data frame sent\nI1122 23:02:03.808439 1645 log.go:172] (0xc0004a8630) (0xc000734be0) Stream removed, broadcasting: 1\nI1122 23:02:03.808467 1645 log.go:172] (0xc0004a8630) Go away received\nI1122 23:02:03.809027 1645 log.go:172] (0xc0004a8630) (0xc000734be0) Stream removed, broadcasting: 1\nI1122 23:02:03.809067 1645 log.go:172] (0xc0004a8630) (0xc000734320) Stream removed, broadcasting: 3\nI1122 23:02:03.809089 1645 log.go:172] (0xc0004a8630) (0xc0001a6000) Stream removed, broadcasting: 5\n" Nov 22 
23:02:03.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 22 23:02:03.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 22 23:02:03.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:02:04.040: INFO: stderr: "I1122 23:02:03.943496 1667 log.go:172] (0xc0004b26e0) (0xc0004228c0) Create stream\nI1122 23:02:03.943548 1667 log.go:172] (0xc0004b26e0) (0xc0004228c0) Stream added, broadcasting: 1\nI1122 23:02:03.947208 1667 log.go:172] (0xc0004b26e0) Reply frame received for 1\nI1122 23:02:03.947249 1667 log.go:172] (0xc0004b26e0) (0xc0008d6000) Create stream\nI1122 23:02:03.947260 1667 log.go:172] (0xc0004b26e0) (0xc0008d6000) Stream added, broadcasting: 3\nI1122 23:02:03.948157 1667 log.go:172] (0xc0004b26e0) Reply frame received for 3\nI1122 23:02:03.948187 1667 log.go:172] (0xc0004b26e0) (0xc000422000) Create stream\nI1122 23:02:03.948197 1667 log.go:172] (0xc0004b26e0) (0xc000422000) Stream added, broadcasting: 5\nI1122 23:02:03.949148 1667 log.go:172] (0xc0004b26e0) Reply frame received for 5\nI1122 23:02:04.030014 1667 log.go:172] (0xc0004b26e0) Data frame received for 5\nI1122 23:02:04.030047 1667 log.go:172] (0xc000422000) (5) Data frame handling\nI1122 23:02:04.030067 1667 log.go:172] (0xc000422000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1122 23:02:04.031515 1667 log.go:172] (0xc0004b26e0) Data frame received for 5\nI1122 23:02:04.031547 1667 log.go:172] (0xc000422000) (5) Data frame handling\nI1122 23:02:04.031569 1667 log.go:172] (0xc000422000) (5) Data frame sent\nI1122 23:02:04.031585 1667 log.go:172] (0xc0004b26e0) Data frame received for 5\nI1122 23:02:04.031606 1667 log.go:172] (0xc000422000) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such 
file or directory\n+ true\nI1122 23:02:04.031643 1667 log.go:172] (0xc0004b26e0) Data frame received for 3\nI1122 23:02:04.031693 1667 log.go:172] (0xc0008d6000) (3) Data frame handling\nI1122 23:02:04.031710 1667 log.go:172] (0xc0008d6000) (3) Data frame sent\nI1122 23:02:04.031763 1667 log.go:172] (0xc000422000) (5) Data frame sent\nI1122 23:02:04.031878 1667 log.go:172] (0xc0004b26e0) Data frame received for 5\nI1122 23:02:04.031976 1667 log.go:172] (0xc000422000) (5) Data frame handling\nI1122 23:02:04.032017 1667 log.go:172] (0xc0004b26e0) Data frame received for 3\nI1122 23:02:04.032041 1667 log.go:172] (0xc0008d6000) (3) Data frame handling\nI1122 23:02:04.033714 1667 log.go:172] (0xc0004b26e0) Data frame received for 1\nI1122 23:02:04.033748 1667 log.go:172] (0xc0004228c0) (1) Data frame handling\nI1122 23:02:04.033775 1667 log.go:172] (0xc0004228c0) (1) Data frame sent\nI1122 23:02:04.033810 1667 log.go:172] (0xc0004b26e0) (0xc0004228c0) Stream removed, broadcasting: 1\nI1122 23:02:04.033842 1667 log.go:172] (0xc0004b26e0) Go away received\nI1122 23:02:04.034254 1667 log.go:172] (0xc0004b26e0) (0xc0004228c0) Stream removed, broadcasting: 1\nI1122 23:02:04.034277 1667 log.go:172] (0xc0004b26e0) (0xc0008d6000) Stream removed, broadcasting: 3\nI1122 23:02:04.034286 1667 log.go:172] (0xc0004b26e0) (0xc000422000) Stream removed, broadcasting: 5\n" Nov 22 23:02:04.040: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 22 23:02:04.040: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 22 23:02:04.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:02:04.242: INFO: stderr: "I1122 23:02:04.173304 1689 log.go:172] (0xc0006a8370) (0xc0006123c0) Create stream\nI1122 23:02:04.173351 1689 log.go:172] (0xc0006a8370) 
(0xc0006123c0) Stream added, broadcasting: 1\nI1122 23:02:04.176629 1689 log.go:172] (0xc0006a8370) Reply frame received for 1\nI1122 23:02:04.176655 1689 log.go:172] (0xc0006a8370) (0xc000a5a000) Create stream\nI1122 23:02:04.176663 1689 log.go:172] (0xc0006a8370) (0xc000a5a000) Stream added, broadcasting: 3\nI1122 23:02:04.177666 1689 log.go:172] (0xc0006a8370) Reply frame received for 3\nI1122 23:02:04.177696 1689 log.go:172] (0xc0006a8370) (0xc0006121e0) Create stream\nI1122 23:02:04.177706 1689 log.go:172] (0xc0006a8370) (0xc0006121e0) Stream added, broadcasting: 5\nI1122 23:02:04.178751 1689 log.go:172] (0xc0006a8370) Reply frame received for 5\nI1122 23:02:04.232750 1689 log.go:172] (0xc0006a8370) Data frame received for 5\nI1122 23:02:04.232787 1689 log.go:172] (0xc0006121e0) (5) Data frame handling\nI1122 23:02:04.232810 1689 log.go:172] (0xc0006121e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI1122 23:02:04.233000 1689 log.go:172] (0xc0006a8370) Data frame received for 3\nI1122 23:02:04.233030 1689 log.go:172] (0xc000a5a000) (3) Data frame handling\nI1122 23:02:04.233051 1689 log.go:172] (0xc000a5a000) (3) Data frame sent\nI1122 23:02:04.233072 1689 log.go:172] (0xc0006a8370) Data frame received for 3\nI1122 23:02:04.233088 1689 log.go:172] (0xc000a5a000) (3) Data frame handling\nI1122 23:02:04.233126 1689 log.go:172] (0xc0006a8370) Data frame received for 5\nI1122 23:02:04.233145 1689 log.go:172] (0xc0006121e0) (5) Data frame handling\nI1122 23:02:04.235519 1689 log.go:172] (0xc0006a8370) Data frame received for 1\nI1122 23:02:04.235544 1689 log.go:172] (0xc0006123c0) (1) Data frame handling\nI1122 23:02:04.235566 1689 log.go:172] (0xc0006123c0) (1) Data frame sent\nI1122 23:02:04.235600 1689 log.go:172] (0xc0006a8370) (0xc0006123c0) Stream removed, broadcasting: 1\nI1122 23:02:04.235643 1689 log.go:172] (0xc0006a8370) Go away received\nI1122 23:02:04.236123 
1689 log.go:172] (0xc0006a8370) (0xc0006123c0) Stream removed, broadcasting: 1\nI1122 23:02:04.236147 1689 log.go:172] (0xc0006a8370) (0xc000a5a000) Stream removed, broadcasting: 3\nI1122 23:02:04.236158 1689 log.go:172] (0xc0006a8370) (0xc0006121e0) Stream removed, broadcasting: 5\n" Nov 22 23:02:04.242: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Nov 22 23:02:04.242: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Nov 22 23:02:04.246: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Nov 22 23:02:14.271: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Nov 22 23:02:14.271: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Nov 22 23:02:14.271: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Nov 22 23:02:14.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 22 23:02:14.501: INFO: stderr: "I1122 23:02:14.401309 1711 log.go:172] (0xc000a0c160) (0xc0008cc140) Create stream\nI1122 23:02:14.401370 1711 log.go:172] (0xc000a0c160) (0xc0008cc140) Stream added, broadcasting: 1\nI1122 23:02:14.403992 1711 log.go:172] (0xc000a0c160) Reply frame received for 1\nI1122 23:02:14.404041 1711 log.go:172] (0xc000a0c160) (0xc000922000) Create stream\nI1122 23:02:14.404059 1711 log.go:172] (0xc000a0c160) (0xc000922000) Stream added, broadcasting: 3\nI1122 23:02:14.405115 1711 log.go:172] (0xc000a0c160) Reply frame received for 3\nI1122 23:02:14.405170 1711 log.go:172] (0xc000a0c160) (0xc0009220a0) Create stream\nI1122 23:02:14.405184 1711 log.go:172] (0xc000a0c160) (0xc0009220a0) Stream added, broadcasting: 5\nI1122 
23:02:14.406072 1711 log.go:172] (0xc000a0c160) Reply frame received for 5\nI1122 23:02:14.490255 1711 log.go:172] (0xc000a0c160) Data frame received for 5\nI1122 23:02:14.490280 1711 log.go:172] (0xc0009220a0) (5) Data frame handling\nI1122 23:02:14.490287 1711 log.go:172] (0xc0009220a0) (5) Data frame sent\nI1122 23:02:14.490293 1711 log.go:172] (0xc000a0c160) Data frame received for 5\nI1122 23:02:14.490297 1711 log.go:172] (0xc0009220a0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:02:14.490328 1711 log.go:172] (0xc000a0c160) Data frame received for 3\nI1122 23:02:14.490355 1711 log.go:172] (0xc000922000) (3) Data frame handling\nI1122 23:02:14.490380 1711 log.go:172] (0xc000922000) (3) Data frame sent\nI1122 23:02:14.490401 1711 log.go:172] (0xc000a0c160) Data frame received for 3\nI1122 23:02:14.490413 1711 log.go:172] (0xc000922000) (3) Data frame handling\nI1122 23:02:14.492069 1711 log.go:172] (0xc000a0c160) Data frame received for 1\nI1122 23:02:14.492091 1711 log.go:172] (0xc0008cc140) (1) Data frame handling\nI1122 23:02:14.492101 1711 log.go:172] (0xc0008cc140) (1) Data frame sent\nI1122 23:02:14.492112 1711 log.go:172] (0xc000a0c160) (0xc0008cc140) Stream removed, broadcasting: 1\nI1122 23:02:14.492126 1711 log.go:172] (0xc000a0c160) Go away received\nI1122 23:02:14.492572 1711 log.go:172] (0xc000a0c160) (0xc0008cc140) Stream removed, broadcasting: 1\nI1122 23:02:14.492604 1711 log.go:172] (0xc000a0c160) (0xc000922000) Stream removed, broadcasting: 3\nI1122 23:02:14.492619 1711 log.go:172] (0xc000a0c160) (0xc0009220a0) Stream removed, broadcasting: 5\n" Nov 22 23:02:14.502: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 22 23:02:14.502: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 22 23:02:14.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 22 23:02:14.762: INFO: stderr: "I1122 23:02:14.653199 1733 log.go:172] (0xc000958420) (0xc0001fa640) Create stream\nI1122 23:02:14.653241 1733 log.go:172] (0xc000958420) (0xc0001fa640) Stream added, broadcasting: 1\nI1122 23:02:14.655917 1733 log.go:172] (0xc000958420) Reply frame received for 1\nI1122 23:02:14.655947 1733 log.go:172] (0xc000958420) (0xc0005ee140) Create stream\nI1122 23:02:14.655957 1733 log.go:172] (0xc000958420) (0xc0005ee140) Stream added, broadcasting: 3\nI1122 23:02:14.656982 1733 log.go:172] (0xc000958420) Reply frame received for 3\nI1122 23:02:14.657022 1733 log.go:172] (0xc000958420) (0xc0008e2000) Create stream\nI1122 23:02:14.657034 1733 log.go:172] (0xc000958420) (0xc0008e2000) Stream added, broadcasting: 5\nI1122 23:02:14.657901 1733 log.go:172] (0xc000958420) Reply frame received for 5\nI1122 23:02:14.718295 1733 log.go:172] (0xc000958420) Data frame received for 5\nI1122 23:02:14.718331 1733 log.go:172] (0xc0008e2000) (5) Data frame handling\nI1122 23:02:14.718350 1733 log.go:172] (0xc0008e2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:02:14.753115 1733 log.go:172] (0xc000958420) Data frame received for 3\nI1122 23:02:14.753160 1733 log.go:172] (0xc0005ee140) (3) Data frame handling\nI1122 23:02:14.753188 1733 log.go:172] (0xc0005ee140) (3) Data frame sent\nI1122 23:02:14.753353 1733 log.go:172] (0xc000958420) Data frame received for 3\nI1122 23:02:14.753381 1733 log.go:172] (0xc0005ee140) (3) Data frame handling\nI1122 23:02:14.753618 1733 log.go:172] (0xc000958420) Data frame received for 5\nI1122 23:02:14.753646 1733 log.go:172] (0xc0008e2000) (5) Data frame handling\nI1122 23:02:14.755320 1733 log.go:172] (0xc000958420) Data frame received for 1\nI1122 23:02:14.755353 1733 log.go:172] (0xc0001fa640) (1) Data frame handling\nI1122 23:02:14.755362 1733 log.go:172] (0xc0001fa640) (1) Data 
frame sent\nI1122 23:02:14.755378 1733 log.go:172] (0xc000958420) (0xc0001fa640) Stream removed, broadcasting: 1\nI1122 23:02:14.755443 1733 log.go:172] (0xc000958420) Go away received\nI1122 23:02:14.755659 1733 log.go:172] (0xc000958420) (0xc0001fa640) Stream removed, broadcasting: 1\nI1122 23:02:14.755672 1733 log.go:172] (0xc000958420) (0xc0005ee140) Stream removed, broadcasting: 3\nI1122 23:02:14.755677 1733 log.go:172] (0xc000958420) (0xc0008e2000) Stream removed, broadcasting: 5\n" Nov 22 23:02:14.762: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 22 23:02:14.762: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 22 23:02:14.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Nov 22 23:02:15.001: INFO: stderr: "I1122 23:02:14.900362 1753 log.go:172] (0xc00010c6e0) (0xc00090e6e0) Create stream\nI1122 23:02:14.900424 1753 log.go:172] (0xc00010c6e0) (0xc00090e6e0) Stream added, broadcasting: 1\nI1122 23:02:14.910019 1753 log.go:172] (0xc00010c6e0) Reply frame received for 1\nI1122 23:02:14.910088 1753 log.go:172] (0xc00010c6e0) (0xc00090e780) Create stream\nI1122 23:02:14.910103 1753 log.go:172] (0xc00010c6e0) (0xc00090e780) Stream added, broadcasting: 3\nI1122 23:02:14.911433 1753 log.go:172] (0xc00010c6e0) Reply frame received for 3\nI1122 23:02:14.911479 1753 log.go:172] (0xc00010c6e0) (0xc00090e820) Create stream\nI1122 23:02:14.911492 1753 log.go:172] (0xc00010c6e0) (0xc00090e820) Stream added, broadcasting: 5\nI1122 23:02:14.915815 1753 log.go:172] (0xc00010c6e0) Reply frame received for 5\nI1122 23:02:14.959402 1753 log.go:172] (0xc00010c6e0) Data frame received for 5\nI1122 23:02:14.959432 1753 log.go:172] (0xc00090e820) (5) Data frame handling\nI1122 23:02:14.959450 1753 log.go:172] (0xc00090e820) (5) 
Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:02:14.994039 1753 log.go:172] (0xc00010c6e0) Data frame received for 5\nI1122 23:02:14.994075 1753 log.go:172] (0xc00090e820) (5) Data frame handling\nI1122 23:02:14.994110 1753 log.go:172] (0xc00010c6e0) Data frame received for 3\nI1122 23:02:14.994145 1753 log.go:172] (0xc00090e780) (3) Data frame handling\nI1122 23:02:14.994171 1753 log.go:172] (0xc00090e780) (3) Data frame sent\nI1122 23:02:14.994251 1753 log.go:172] (0xc00010c6e0) Data frame received for 3\nI1122 23:02:14.994270 1753 log.go:172] (0xc00090e780) (3) Data frame handling\nI1122 23:02:14.995695 1753 log.go:172] (0xc00010c6e0) Data frame received for 1\nI1122 23:02:14.995715 1753 log.go:172] (0xc00090e6e0) (1) Data frame handling\nI1122 23:02:14.995725 1753 log.go:172] (0xc00090e6e0) (1) Data frame sent\nI1122 23:02:14.995737 1753 log.go:172] (0xc00010c6e0) (0xc00090e6e0) Stream removed, broadcasting: 1\nI1122 23:02:14.995799 1753 log.go:172] (0xc00010c6e0) Go away received\nI1122 23:02:14.995988 1753 log.go:172] (0xc00010c6e0) (0xc00090e6e0) Stream removed, broadcasting: 1\nI1122 23:02:14.996000 1753 log.go:172] (0xc00010c6e0) (0xc00090e780) Stream removed, broadcasting: 3\nI1122 23:02:14.996008 1753 log.go:172] (0xc00010c6e0) (0xc00090e820) Stream removed, broadcasting: 5\n" Nov 22 23:02:15.001: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Nov 22 23:02:15.001: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Nov 22 23:02:15.001: INFO: Waiting for statefulset status.replicas updated to 0 Nov 22 23:02:15.004: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Nov 22 23:02:25.011: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Nov 22 23:02:25.011: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Nov 22 
23:02:25.011: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Nov 22 23:02:25.030: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:25.030: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC }] Nov 22 23:02:25.030: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:25.030: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:25.030: INFO: Nov 22 23:02:25.030: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 22 23:02:26.034: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:26.034: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 
23:02:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC }] Nov 22 23:02:26.034: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:26.035: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:26.035: INFO: Nov 22 23:02:26.035: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 22 23:02:27.039: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:27.039: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC }] Nov 22 23:02:27.039: INFO: ss-1 iruya-worker2 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:27.039: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:27.039: INFO: Nov 22 23:02:27.039: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 22 23:02:28.043: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:28.043: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:33 +0000 UTC }] Nov 22 23:02:28.043: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:28.043: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:28.043: INFO: Nov 22 23:02:28.043: INFO: StatefulSet ss has not reached scale 0, at 3 Nov 22 23:02:29.047: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:29.047: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:29.048: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:29.048: INFO: Nov 22 23:02:29.048: INFO: StatefulSet ss has not reached scale 0, at 2 Nov 22 23:02:30.051: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:30.051: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:30.052: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:30.052: INFO: Nov 22 23:02:30.052: INFO: StatefulSet ss has not reached scale 0, at 2 Nov 22 23:02:31.057: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:31.057: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:31.057: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:31.057: INFO: Nov 22 23:02:31.057: INFO: StatefulSet ss has not reached scale 0, at 2 Nov 22 23:02:32.062: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:32.062: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:32.062: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:32.062: INFO: Nov 22 23:02:32.062: INFO: StatefulSet ss has not reached scale 0, at 2 Nov 22 23:02:33.068: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:33.068: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:33.068: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 
+0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:33.068: INFO: Nov 22 23:02:33.068: INFO: StatefulSet ss has not reached scale 0, at 2 Nov 22 23:02:34.078: INFO: POD NODE PHASE GRACE CONDITIONS Nov 22 23:02:34.078: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:34.078: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:54 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:02:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:01:53 +0000 UTC }] Nov 22 23:02:34.078: INFO: Nov 22 23:02:34.078: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-3923 Nov 22 23:02:35.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:02:35.228: INFO: rc: 1 Nov 22 23:02:35.228: INFO: 
Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00250acf0 exit status 1 true [0xc000750d30 0xc000750dd8 0xc000750e30] [0xc000750d30 0xc000750dd8 0xc000750e30] [0xc000750d80 0xc000750e28] [0xba70e0 0xba70e0] 0xc0019f66c0 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Nov 22 23:02:45.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:02:45.365: INFO: rc: 1 Nov 22 23:02:45.365: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc00250adb0 exit status 1 true [0xc000750e40 0xc000750eb0 0xc000750f08] [0xc000750e40 0xc000750eb0 0xc000750f08] [0xc000750e88 0xc000750ed8] [0xba70e0 0xba70e0] 0xc0019f6a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Nov 22 23:02:55.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:02:55.463: INFO: rc: 1 Nov 22 23:02:55.463: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001e833b0 exit status 1 true 
[0xc00055c388 0xc00055c410 0xc00055c438] [0xc00055c388 0xc00055c410 0xc00055c438] [0xc00055c3f0 0xc00055c430] [0xba70e0 0xba70e0] 0xc002df5320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Nov 22 23:03:05.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:03:05.564: INFO: rc: 1 Nov 22 23:03:05.564: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001e834a0 exit status 1 true [0xc00055c440 0xc00055c458 0xc00055c470] [0xc00055c440 0xc00055c458 0xc00055c470] [0xc00055c450 0xc00055c468] [0xba70e0 0xba70e0] 0xc002df5ec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Nov 22 23:03:15.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:03:15.660: INFO: rc: 1 Nov 22 23:03:15.660: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc001e836b0 exit status 1 true [0xc00055c478 0xc00055c4c0 0xc00055c4e8] [0xc00055c478 0xc00055c4c0 0xc00055c4e8] [0xc00055c4a0 0xc00055c4e0] [0xba70e0 0xba70e0] 0xc002618300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 Nov 22 23:03:25.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:03:25.759: INFO: rc: 1 Nov 22 23:03:25.759: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-1" not found [] 0xc002e70ea0 exit status 1 true [0xc002b7e040 0xc002b7e058 0xc002b7e070] [0xc002b7e040 0xc002b7e058 0xc002b7e070] [0xc002b7e050 0xc002b7e068] [0xba70e0 0xba70e0] 0xc003b29ec0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-1" not found error: exit status 1 [... identical RunHostCmd retries every 10s from Nov 22 23:03:35 through Nov 22 23:07:30 trimmed; each attempt returned rc: 1 with stderr: Error from server (NotFound): pods "ss-1" not found, differing only in timestamps and Go struct pointer dumps ...] Nov 22 23:07:40.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3923 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Nov 22 23:07:40.838: INFO: rc: 1 Nov 22 23:07:40.838: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: Nov 22 23:07:40.838: INFO: Scaling statefulset ss to 0 Nov 22 23:07:40.845: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Nov 22 23:07:40.847: INFO: Deleting all statefulset in ns statefulset-3923 Nov 22 23:07:40.848: INFO: Scaling statefulset ss to 0 Nov 22 23:07:40.854: INFO: Waiting for statefulset status.replicas updated to 0 Nov 22 23:07:40.855: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:07:40.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3923" for this suite.
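For context, the `ss` workload these RunHostCmd calls target is the burst-scaling test's nginx-backed StatefulSet; the repeated `mv -v /tmp/index.html /usr/share/nginx/html/ || true` exec is how the suite restores a pod to a healthy state, and the NotFound errors are expected once `ss-1` has already been removed during scale-down. A minimal StatefulSet of the same shape (the service name, labels, and image tag here are illustrative assumptions, not taken from this log) might look like:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss
  namespace: statefulset-3923
spec:
  serviceName: test            # assumed headless Service name
  replicas: 3
  selector:
    matchLabels:
      app: ss
  template:
    metadata:
      labels:
        app: ss
    spec:
      containers:
      - name: nginx
        image: nginx           # serves /usr/share/nginx/html, the target of the mv above
        ports:
        - containerPort: 80
```

The `|| true` in the exec'd shell command keeps a failed `mv` from failing the shell itself; the rc: 1 seen on every retry comes from `kubectl exec` rejecting the request server-side because the pod no longer exists.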
Nov 22 23:07:46.881: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:07:46.952: INFO: namespace statefulset-3923 deletion completed in 6.081827191s • [SLOW TEST:373.801 seconds] [sig-apps] StatefulSet /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:07:46.953: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Nov 22 23:07:46.989: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:07:54.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3866" for this suite. Nov 22 23:08:00.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:08:00.762: INFO: namespace init-container-3866 deletion completed in 6.090186024s • [SLOW TEST:13.809 seconds] [k8s.io] InitContainer [NodeConformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:08:00.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 23:08:00.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Nov 22 23:08:00.992: INFO: stderr: "" Nov 22 23:08:00.992: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.12\", GitCommit:\"e2a822d9f3c2fdb5c9bfbe64313cf9f657f0a725\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T05:17:59Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-01T02:31:02Z\", GoVersion:\"go1.12.17\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:08:00.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4404" for this suite. 
Nov 22 23:08:07.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:08:07.091: INFO: namespace kubectl-4404 deletion completed in 6.094986444s • [SLOW TEST:6.329 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:08:07.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-ecea19ed-7f74-4997-acf5-e61070f449c3 STEP: Creating a pod to test consume configMaps Nov 22 
23:08:07.174: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6252f54a-b7d8-4255-83da-c3523249ab23" in namespace "projected-2689" to be "success or failure" Nov 22 23:08:07.183: INFO: Pod "pod-projected-configmaps-6252f54a-b7d8-4255-83da-c3523249ab23": Phase="Pending", Reason="", readiness=false. Elapsed: 8.965818ms Nov 22 23:08:09.186: INFO: Pod "pod-projected-configmaps-6252f54a-b7d8-4255-83da-c3523249ab23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01264737s Nov 22 23:08:11.191: INFO: Pod "pod-projected-configmaps-6252f54a-b7d8-4255-83da-c3523249ab23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01695361s STEP: Saw pod success Nov 22 23:08:11.191: INFO: Pod "pod-projected-configmaps-6252f54a-b7d8-4255-83da-c3523249ab23" satisfied condition "success or failure" Nov 22 23:08:11.194: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-6252f54a-b7d8-4255-83da-c3523249ab23 container projected-configmap-volume-test: STEP: delete the pod Nov 22 23:08:11.226: INFO: Waiting for pod pod-projected-configmaps-6252f54a-b7d8-4255-83da-c3523249ab23 to disappear Nov 22 23:08:11.237: INFO: Pod pod-projected-configmaps-6252f54a-b7d8-4255-83da-c3523249ab23 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:08:11.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2689" for this suite. 
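The "consume configMaps" pod created above follows a common e2e pattern: a projected volume sourcing the ConfigMap, read by a container running as a non-root UID. A sketch of that pattern (the UID, image, key name, and mount path are assumptions for illustration; only the ConfigMap and container names appear in the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1000                        # non-root, per the [LinuxOnly] non-root variant
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox                         # assumed; reads the projected file and exits
    command: ["cat", "/etc/projected-configmap-volume/data-1"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-ecea19ed-7f74-4997-acf5-e61070f449c3
```

Because the container runs to completion after printing the file, the pod passes through Pending → Succeeded, matching the phases logged above.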
Nov 22 23:08:17.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:08:17.349: INFO: namespace projected-2689 deletion completed in 6.107057469s • [SLOW TEST:10.257 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:08:17.349: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-c3d00088-760b-400c-97cc-a12cb77d3617 in namespace container-probe-5260 
Nov 22 23:08:21.454: INFO: Started pod busybox-c3d00088-760b-400c-97cc-a12cb77d3617 in namespace container-probe-5260 STEP: checking the pod's current state and verifying that restartCount is present Nov 22 23:08:21.457: INFO: Initial restart count of pod busybox-c3d00088-760b-400c-97cc-a12cb77d3617 is 0 Nov 22 23:09:15.678: INFO: Restart count of pod container-probe-5260/busybox-c3d00088-760b-400c-97cc-a12cb77d3617 is now 1 (54.220628281s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:09:15.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5260" for this suite. Nov 22 23:09:21.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:09:21.815: INFO: namespace container-probe-5260 deletion completed in 6.11902515s • [SLOW TEST:64.466 seconds] [k8s.io] Probing container /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:09:21.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-d6602932-5286-4487-b246-7cf2a83179d1
STEP: Creating a pod to test consume secrets
Nov 22 23:09:21.908: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6b1144f1-ae88-48aa-8ca7-39fe5f8c816e" in namespace "projected-13" to be "success or failure"
Nov 22 23:09:21.915: INFO: Pod "pod-projected-secrets-6b1144f1-ae88-48aa-8ca7-39fe5f8c816e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.733118ms
Nov 22 23:09:23.939: INFO: Pod "pod-projected-secrets-6b1144f1-ae88-48aa-8ca7-39fe5f8c816e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031138677s
Nov 22 23:09:25.942: INFO: Pod "pod-projected-secrets-6b1144f1-ae88-48aa-8ca7-39fe5f8c816e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034383441s
STEP: Saw pod success
Nov 22 23:09:25.942: INFO: Pod "pod-projected-secrets-6b1144f1-ae88-48aa-8ca7-39fe5f8c816e" satisfied condition "success or failure"
Nov 22 23:09:25.944: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-6b1144f1-ae88-48aa-8ca7-39fe5f8c816e container projected-secret-volume-test:
STEP: delete the pod
Nov 22 23:09:25.976: INFO: Waiting for pod pod-projected-secrets-6b1144f1-ae88-48aa-8ca7-39fe5f8c816e to disappear
Nov 22 23:09:26.004: INFO: Pod pod-projected-secrets-6b1144f1-ae88-48aa-8ca7-39fe5f8c816e no longer exists
[AfterEach] [sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:09:26.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-13" for this suite.
Nov 22 23:09:32.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:09:32.126: INFO: namespace projected-13 deletion completed in 6.118856807s
• [SLOW TEST:10.311 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:09:32.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:09:32.204: INFO: Waiting up to 5m0s for pod "downwardapi-volume-53af656f-e623-4962-b5f8-5394e071941c" in namespace "projected-5739" to be "success or failure"
Nov 22 23:09:32.249: INFO: Pod "downwardapi-volume-53af656f-e623-4962-b5f8-5394e071941c": Phase="Pending", Reason="", readiness=false. Elapsed: 44.692318ms
Nov 22 23:09:34.281: INFO: Pod "downwardapi-volume-53af656f-e623-4962-b5f8-5394e071941c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076250741s
Nov 22 23:09:36.285: INFO: Pod "downwardapi-volume-53af656f-e623-4962-b5f8-5394e071941c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.081057137s
STEP: Saw pod success
Nov 22 23:09:36.285: INFO: Pod "downwardapi-volume-53af656f-e623-4962-b5f8-5394e071941c" satisfied condition "success or failure"
Nov 22 23:09:36.289: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-53af656f-e623-4962-b5f8-5394e071941c container client-container:
STEP: delete the pod
Nov 22 23:09:36.348: INFO: Waiting for pod downwardapi-volume-53af656f-e623-4962-b5f8-5394e071941c to disappear
Nov 22 23:09:36.355: INFO: Pod downwardapi-volume-53af656f-e623-4962-b5f8-5394e071941c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:09:36.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5739" for this suite.
Nov 22 23:09:42.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:09:42.447: INFO: namespace projected-5739 deletion completed in 6.088563102s
• [SLOW TEST:10.320 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's memory limit [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:09:42.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-8a6ba35d-bf1d-4840-beaf-8a09c94bd7ad in namespace container-probe-623
Nov 22 23:09:46.512: INFO: Started pod liveness-8a6ba35d-bf1d-4840-beaf-8a09c94bd7ad in namespace container-probe-623
STEP: checking the pod's current state and verifying that restartCount is present
Nov 22 23:09:46.515: INFO: Initial restart count of pod liveness-8a6ba35d-bf1d-4840-beaf-8a09c94bd7ad is 0
Nov 22 23:10:10.566: INFO: Restart count of pod container-probe-623/liveness-8a6ba35d-bf1d-4840-beaf-8a09c94bd7ad is now 1 (24.051321418s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:10:10.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-623" for this suite.
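The restart observed above is the expected outcome of an HTTP liveness probe failing. The e2e framework builds this pod programmatically; the following manifest is an illustrative sketch of an equivalent pod (image, name, and probe timings here are assumptions, not taken from the log):

```yaml
# Sketch only: a pod whose /healthz liveness probe starts failing,
# causing the kubelet to restart the container (restartCount goes 0 -> 1).
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http          # illustrative name
spec:
  containers:
  - name: liveness
    image: gcr.io/kubernetes-e2e-test-images/liveness:1.1   # assumed image
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3   # timings are illustrative
      periodSeconds: 3
      failureThreshold: 1
```

With settings like these, the kubelet probes `/healthz` every few seconds and restarts the container on failure, which is what the ~24s restart in the log reflects.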
Nov 22 23:10:16.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:10:16.679: INFO: namespace container-probe-623 deletion completed in 6.076965641s
• [SLOW TEST:34.232 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:10:16.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Nov 22 23:10:16.741: INFO: Waiting up to 5m0s for pod "pod-10458331-ea65-42b2-84fd-cef36659febb" in namespace "emptydir-9192" to be "success or failure"
Nov 22 23:10:16.765: INFO: Pod "pod-10458331-ea65-42b2-84fd-cef36659febb": Phase="Pending", Reason="", readiness=false. Elapsed: 23.775062ms
Nov 22 23:10:18.769: INFO: Pod "pod-10458331-ea65-42b2-84fd-cef36659febb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02764963s
Nov 22 23:10:20.772: INFO: Pod "pod-10458331-ea65-42b2-84fd-cef36659febb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031108809s
STEP: Saw pod success
Nov 22 23:10:20.772: INFO: Pod "pod-10458331-ea65-42b2-84fd-cef36659febb" satisfied condition "success or failure"
Nov 22 23:10:20.775: INFO: Trying to get logs from node iruya-worker2 pod pod-10458331-ea65-42b2-84fd-cef36659febb container test-container:
STEP: delete the pod
Nov 22 23:10:20.795: INFO: Waiting for pod pod-10458331-ea65-42b2-84fd-cef36659febb to disappear
Nov 22 23:10:20.799: INFO: Pod pod-10458331-ea65-42b2-84fd-cef36659febb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:10:20.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9192" for this suite.
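The "(root,0666,default)" case exercises an emptyDir volume on the node's default medium with file mode 0666. The test pod is generated by the framework; a minimal hand-written equivalent (image and command are illustrative assumptions) looks like:

```yaml
# Sketch only: write a file into an emptyDir on the default medium
# and show its permissions; mirrors the (root,0666,default) case.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox             # assumed image
    command: ["sh", "-c", "umask 0; echo data > /mnt/test/file && ls -l /mnt/test/file"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir: {}               # default medium (node-local disk)
```

The pod runs to completion ("Succeeded"), which is why the framework polls for a terminal "success or failure" condition rather than Ready.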
Nov 22 23:10:26.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:10:26.890: INFO: namespace emptydir-9192 deletion completed in 6.08670812s
• [SLOW TEST:10.210 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:10:26.890: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Nov 22 23:10:26.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4221'
Nov 22 23:10:27.234: INFO: stderr: ""
Nov 22 23:10:27.234: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Nov 22 23:10:28.238: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:10:28.238: INFO: Found 0 / 1
Nov 22 23:10:29.269: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:10:29.269: INFO: Found 0 / 1
Nov 22 23:10:30.237: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:10:30.237: INFO: Found 0 / 1
Nov 22 23:10:31.239: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:10:31.239: INFO: Found 1 / 1
Nov 22 23:10:31.239: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Nov 22 23:10:31.242: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:10:31.242: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Nov 22 23:10:31.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-j4k7j --namespace=kubectl-4221 -p {"metadata":{"annotations":{"x":"y"}}}'
Nov 22 23:10:31.342: INFO: stderr: ""
Nov 22 23:10:31.342: INFO: stdout: "pod/redis-master-j4k7j patched\n"
STEP: checking annotations
Nov 22 23:10:31.352: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:10:31.352: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:10:31.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4221" for this suite.
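The `kubectl patch` invocation in the log passes the patch inline as JSON. The same strategic-merge patch body, expressed as YAML (e.g. for use with `--patch-file`), is:

```yaml
# Equivalent of the inline patch {"metadata":{"annotations":{"x":"y"}}}:
# merged into the pod's existing metadata, adding one annotation.
metadata:
  annotations:
    x: "y"
```

Strategic merge is kubectl's default patch type for built-in resources, so only the fields being added need to appear; everything else on the pod is left untouched.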
Nov 22 23:10:53.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:10:53.442: INFO: namespace kubectl-4221 deletion completed in 22.086351986s
• [SLOW TEST:26.552 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:10:53.442: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-ldjx
STEP: Creating a pod to test atomic-volume-subpath
Nov 22 23:10:53.535: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-ldjx" in namespace "subpath-4348" to be "success or failure"
Nov 22 23:10:53.576: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Pending", Reason="", readiness=false. Elapsed: 40.389438ms
Nov 22 23:10:55.640: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105343577s
Nov 22 23:10:57.645: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 4.109524959s
Nov 22 23:10:59.648: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 6.113123272s
Nov 22 23:11:01.653: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 8.117404212s
Nov 22 23:11:03.656: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 10.121266159s
Nov 22 23:11:05.661: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 12.125743589s
Nov 22 23:11:07.665: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 14.129850681s
Nov 22 23:11:09.669: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 16.134290801s
Nov 22 23:11:11.674: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 18.138663471s
Nov 22 23:11:13.678: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 20.143229679s
Nov 22 23:11:15.682: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Running", Reason="", readiness=true. Elapsed: 22.147281061s
Nov 22 23:11:17.687: INFO: Pod "pod-subpath-test-secret-ldjx": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.151963462s
STEP: Saw pod success
Nov 22 23:11:17.687: INFO: Pod "pod-subpath-test-secret-ldjx" satisfied condition "success or failure"
Nov 22 23:11:17.690: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-ldjx container test-container-subpath-secret-ldjx:
STEP: delete the pod
Nov 22 23:11:17.723: INFO: Waiting for pod pod-subpath-test-secret-ldjx to disappear
Nov 22 23:11:17.735: INFO: Pod pod-subpath-test-secret-ldjx no longer exists
STEP: Deleting pod pod-subpath-test-secret-ldjx
Nov 22 23:11:17.735: INFO: Deleting pod "pod-subpath-test-secret-ldjx" in namespace "subpath-4348"
[AfterEach] [sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:11:17.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4348" for this suite.
Nov 22 23:11:23.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:11:23.876: INFO: namespace subpath-4348 deletion completed in 6.135576979s
• [SLOW TEST:30.434 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with secret pod [LinuxOnly] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:11:23.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:11:28.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7257" for this suite.
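The hostAliases test verifies that entries from `pod.spec.hostAliases` are appended to the container's `/etc/hosts`. A minimal sketch of such a pod (the IPs, hostnames, and image below are illustrative assumptions, not the framework's exact pod):

```yaml
# Sketch only: hostAliases entries are written by the kubelet
# into /etc/hosts alongside the managed pod entries.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases   # illustrative name
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames: ["foo.local", "bar.local"]
  containers:
  - name: busybox
    image: busybox             # assumed image
    command: ["cat", "/etc/hosts"]
```

Reading the container's log then shows a kubelet-managed section containing `127.0.0.1 foo.local bar.local`, which is what the test asserts on.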
Nov 22 23:12:06.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:12:06.114: INFO: namespace kubelet-test-7257 deletion completed in 38.10985353s
• [SLOW TEST:42.238 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:12:06.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Nov 22 23:12:06.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Nov 22 23:12:06.320: INFO: stderr: ""
Nov 22 23:12:06.320: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37711\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:37711/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:12:06.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7289" for this suite.
Nov 22 23:12:12.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:12:12.426: INFO: namespace kubectl-7289 deletion completed in 6.101549751s
• [SLOW TEST:6.311 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl cluster-info
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if Kubernetes master services is included in cluster-info [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:12:12.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Nov 22 23:12:16.522: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-ae8a4d15-6211-4e30-a795-ce146f4145f7,GenerateName:,Namespace:events-7362,SelfLink:/api/v1/namespaces/events-7362/pods/send-events-ae8a4d15-6211-4e30-a795-ce146f4145f7,UID:1bab18e9-dd57-4372-ae77-bfe772d1861e,ResourceVersion:10985319,Generation:0,CreationTimestamp:2020-11-22 23:12:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 486535510,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-85rhl {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-85rhl,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-85rhl true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003f226e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003f22700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:12:12 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:12:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:12:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:12:12 +0000 UTC }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.227,StartTime:2020-11-22 23:12:12 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-11-22 23:12:15 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://5e282eb4a44fb8b12ab2d484bf37a57cadb5e16b01fbe9a03645342bb13a67d8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
STEP: checking for scheduler event about the pod
Nov 22 23:12:18.527: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Nov 22 23:12:20.532: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:12:20.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-7362" for this suite.
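The single-line `&Pod{...}` dump above is the Go struct rendering of the test pod. Rewritten as a manifest, keeping only the fields the test set explicitly (everything else in the dump is defaulted by the API server), it corresponds to:

```yaml
# Reconstructed from the &Pod{...} dump above; defaulted fields omitted.
apiVersion: v1
kind: Pod
metadata:
  name: send-events-ae8a4d15-6211-4e30-a795-ce146f4145f7
  namespace: events-7362
  labels:
    name: foo
    time: "486535510"
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
```

Once this pod is scheduled and started, the test lists events with a field selector on the pod's name and UID and expects to see both a scheduler event (Scheduled) and kubelet events (e.g. Pulled/Created/Started), matching the two "Saw ... event" lines in the log.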
Nov 22 23:12:58.578: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:12:58.654: INFO: namespace events-7362 deletion completed in 38.108697298s
• [SLOW TEST:46.228 seconds]
[k8s.io] [sig-node] Events
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:12:58.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Nov 22 23:12:58.753: INFO: Waiting up to 5m0s for pod "downward-api-4affeb48-8dff-4a29-9086-fcd03a5378d0" in namespace "downward-api-140" to be "success or failure"
Nov 22 23:12:58.788: INFO: Pod "downward-api-4affeb48-8dff-4a29-9086-fcd03a5378d0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.381286ms
Nov 22 23:13:00.791: INFO: Pod "downward-api-4affeb48-8dff-4a29-9086-fcd03a5378d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037501114s
Nov 22 23:13:02.795: INFO: Pod "downward-api-4affeb48-8dff-4a29-9086-fcd03a5378d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042084002s
STEP: Saw pod success
Nov 22 23:13:02.795: INFO: Pod "downward-api-4affeb48-8dff-4a29-9086-fcd03a5378d0" satisfied condition "success or failure"
Nov 22 23:13:02.799: INFO: Trying to get logs from node iruya-worker pod downward-api-4affeb48-8dff-4a29-9086-fcd03a5378d0 container dapi-container:
STEP: delete the pod
Nov 22 23:13:02.844: INFO: Waiting for pod downward-api-4affeb48-8dff-4a29-9086-fcd03a5378d0 to disappear
Nov 22 23:13:02.863: INFO: Pod downward-api-4affeb48-8dff-4a29-9086-fcd03a5378d0 no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:13:02.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-140" for this suite.
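The downward API host-IP test injects `status.hostIP` into the container's environment. A minimal equivalent pod (the container name `dapi-container` matches the log; the image, command, and variable name are illustrative assumptions):

```yaml
# Sketch only: expose the node's IP to the container via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox             # assumed image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP            # illustrative variable name
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
```

The test then reads the completed container's log and checks that the printed value is a valid node IP, which is why it fetches logs from node iruya-worker after the pod succeeds.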
Nov 22 23:13:08.908: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:13:09.043: INFO: namespace downward-api-140 deletion completed in 6.175627973s
• [SLOW TEST:10.388 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:13:09.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Nov 22 23:13:09.134: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1104" to be "success or failure"
Nov 22 23:13:09.144: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 9.937826ms
Nov 22 23:13:11.253: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118486798s
Nov 22 23:13:13.332: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.197266666s
Nov 22 23:13:15.336: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.201908163s
Nov 22 23:13:17.339: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.204865411s
STEP: Saw pod success
Nov 22 23:13:17.339: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Nov 22 23:13:17.341: INFO: Trying to get logs from node iruya-worker2 pod pod-host-path-test container test-container-1:
STEP: delete the pod
Nov 22 23:13:17.525: INFO: Waiting for pod pod-host-path-test to disappear
Nov 22 23:13:17.569: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:13:17.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1104" for this suite.
Nov 22 23:13:27.636: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:13:27.700: INFO: namespace hostpath-1104 deletion completed in 10.127018317s
• [SLOW TEST:18.657 seconds]
[sig-storage] HostPath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:13:27.701: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Nov 22 23:13:27.871: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-5393 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Nov 22 23:13:34.545: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI1122 23:13:34.473473 2496 log.go:172] (0xc0000fc790) (0xc0006b0a00) Create stream\nI1122 23:13:34.473529 2496 log.go:172] (0xc0000fc790) (0xc0006b0a00) Stream added, broadcasting: 1\nI1122 23:13:34.476875 2496 log.go:172] (0xc0000fc790) Reply frame received for 1\nI1122 23:13:34.476970 2496 log.go:172] (0xc0000fc790) (0xc000690140) Create stream\nI1122 23:13:34.476987 2496 log.go:172] (0xc0000fc790) (0xc000690140) Stream added, broadcasting: 3\nI1122 23:13:34.478084 2496 log.go:172] (0xc0000fc790) Reply frame received for 3\nI1122 23:13:34.478118 2496 log.go:172] (0xc0000fc790) (0xc0002b81e0) Create stream\nI1122 23:13:34.478135 2496 log.go:172] (0xc0000fc790) (0xc0002b81e0) Stream added, broadcasting: 5\nI1122 23:13:34.479016 2496 log.go:172] (0xc0000fc790) Reply frame received for 5\nI1122 23:13:34.479056 2496 log.go:172] (0xc0000fc790) (0xc0006b0aa0) Create stream\nI1122 23:13:34.479079 2496 log.go:172] (0xc0000fc790) (0xc0006b0aa0) Stream added, broadcasting: 7\nI1122 23:13:34.480103 2496 log.go:172] (0xc0000fc790) Reply frame received for 7\nI1122 23:13:34.480285 2496 log.go:172] (0xc000690140) (3) Writing data frame\nI1122 23:13:34.480439 2496 log.go:172] (0xc000690140) (3) Writing data frame\nI1122 23:13:34.482906 2496 log.go:172] (0xc0000fc790) Data frame received for 5\nI1122 23:13:34.482947 2496 log.go:172] (0xc0002b81e0) (5) Data frame handling\nI1122 23:13:34.483000 2496 log.go:172] (0xc0002b81e0) (5) Data frame sent\nI1122 23:13:34.483027 2496 log.go:172] (0xc0000fc790) Data frame received for 5\nI1122 23:13:34.483044 2496 log.go:172] (0xc0002b81e0) (5) Data frame handling\nI1122 23:13:34.483061 2496 log.go:172] (0xc0002b81e0) (5) Data frame sent\nI1122 23:13:34.519119 2496 log.go:172] (0xc0000fc790) Data frame received for 7\nI1122 23:13:34.519165 2496 log.go:172] (0xc0006b0aa0) (7) Data frame handling\nI1122 23:13:34.519213 2496 log.go:172] (0xc0000fc790) Data frame received for 5\nI1122 23:13:34.519254 2496 log.go:172] (0xc0002b81e0) (5) Data frame handling\nI1122 23:13:34.519664 2496 log.go:172] (0xc0000fc790) Data frame received for 1\nI1122 23:13:34.519698 2496 log.go:172] (0xc0006b0a00) (1) Data frame handling\nI1122 23:13:34.519737 2496 log.go:172] (0xc0006b0a00) (1) Data frame sent\nI1122 23:13:34.519923 2496 log.go:172] (0xc0000fc790) (0xc000690140) Stream removed, broadcasting: 3\nI1122 23:13:34.519968 2496 log.go:172] (0xc0000fc790) (0xc0006b0a00) Stream removed, broadcasting: 1\nI1122 23:13:34.519984 2496 log.go:172] (0xc0000fc790) Go away received\nI1122 23:13:34.520178 2496 log.go:172] (0xc0000fc790) (0xc0006b0a00) Stream removed, broadcasting: 1\nI1122 23:13:34.520216 2496 log.go:172] (0xc0000fc790) (0xc000690140) Stream removed, broadcasting: 3\nI1122 23:13:34.520233 2496 log.go:172] (0xc0000fc790) (0xc0002b81e0) Stream removed, broadcasting: 5\nI1122 23:13:34.520257 2496 log.go:172] (0xc0000fc790) (0xc0006b0aa0) Stream removed, broadcasting: 7\n"
Nov 22 23:13:34.546: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:13:36.552: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5393" for this suite.
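For reference, the deprecated `--generator=job/v1` invocation exercised above corresponds roughly to creating a Job and attaching to its pod. The sketch below is a hedged reconstruction inferred from the flags in the logged command, not the test's actual object:

```yaml
# Hedged sketch of the Job that `kubectl run --rm --generator=job/v1 --attach --stdin`
# creates; names, namespace, image, and command are taken from the log above.
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-rm-busybox-job
  namespace: kubectl-5393
spec:
  template:
    spec:
      restartPolicy: OnFailure        # from --restart=OnFailure
      containers:
      - name: e2e-test-rm-busybox-job
        image: docker.io/library/busybox:1.29
        stdin: true                   # from --stdin; lets attach pipe "abcd1234" in
        command: ["sh", "-c", "cat && echo 'stdin closed'"]
```

With `--rm=true`, kubectl deletes the Job after the attach session ends, which is what the "job.batch \"e2e-test-rm-busybox-job\" deleted" line in stdout verifies.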
Nov 22 23:13:42.569: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:13:42.641: INFO: namespace kubectl-5393 deletion completed in 6.084078692s
• [SLOW TEST:14.940 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:13:42.641: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 22 23:13:46.918: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:13:46.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6305" for this suite.
Nov 22 23:13:52.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:13:53.062: INFO: namespace container-runtime-6305 deletion completed in 6.075738635s
• [SLOW TEST:10.422 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:13:53.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:14:02.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8826" for this suite.
Nov 22 23:14:10.305: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:14:10.383: INFO: namespace namespaces-8826 deletion completed in 8.110250539s
STEP: Destroying namespace "nsdeletetest-3919" for this suite.
Nov 22 23:14:10.385: INFO: Namespace nsdeletetest-3919 was already deleted
STEP: Destroying namespace "nsdeletetest-4194" for this suite.
Nov 22 23:14:16.396: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:14:16.471: INFO: namespace nsdeletetest-4194 deletion completed in 6.085896709s
• [SLOW TEST:23.408 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all services are removed when a namespace is deleted [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:14:16.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Nov 22 23:14:16.539: INFO: Waiting up to 5m0s for pod "downward-api-606ed201-e853-4888-a321-b5d99a3e5a2a" in namespace "downward-api-4959" to be "success or failure"
Nov 22 23:14:16.543: INFO: Pod "downward-api-606ed201-e853-4888-a321-b5d99a3e5a2a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.636103ms
Nov 22 23:14:18.547: INFO: Pod "downward-api-606ed201-e853-4888-a321-b5d99a3e5a2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007734324s
Nov 22 23:14:20.552: INFO: Pod "downward-api-606ed201-e853-4888-a321-b5d99a3e5a2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012349795s
STEP: Saw pod success
Nov 22 23:14:20.552: INFO: Pod "downward-api-606ed201-e853-4888-a321-b5d99a3e5a2a" satisfied condition "success or failure"
Nov 22 23:14:20.555: INFO: Trying to get logs from node iruya-worker pod downward-api-606ed201-e853-4888-a321-b5d99a3e5a2a container dapi-container:
STEP: delete the pod
Nov 22 23:14:20.574: INFO: Waiting for pod downward-api-606ed201-e853-4888-a321-b5d99a3e5a2a to disappear
Nov 22 23:14:20.584: INFO: Pod downward-api-606ed201-e853-4888-a321-b5d99a3e5a2a no longer exists
[AfterEach] [sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:14:20.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4959" for this suite.
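The downward API tests above (host IP, pod UID) all create a pod whose container reads pod metadata through `fieldRef` environment variables. A minimal hedged sketch of such a pod; the pod name, env var names, and command are illustrative, not the test's actual spec:

```yaml
# Hedged sketch: expose pod UID and host IP as env vars via the downward API.
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep -E 'POD_UID|HOST_IP'"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid    # pod UID, as in the "pod UID as env vars" case
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP   # node IP, as in the "host IP as an env var" case
```

The test then asserts on the container's logs, which is why the log shows "Trying to get logs ... container dapi-container" after the pod reaches Succeeded.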
Nov 22 23:14:28.600: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:14:28.666: INFO: namespace downward-api-4959 deletion completed in 8.079025924s
• [SLOW TEST:12.196 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod UID as env vars [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:14:28.667: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Nov 22 23:14:34.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-562798f0-c5d5-49bb-87a2-6119919280c3 -c busybox-main-container --namespace=emptydir-6566 -- cat /usr/share/volumeshare/shareddata.txt'
Nov 22 23:14:35.169: INFO: stderr: "I1122 23:14:35.097768 2521 log.go:172] (0xc000141080) (0xc000530b40) Create stream\nI1122 23:14:35.097831 2521 log.go:172] (0xc000141080) (0xc000530b40) Stream added, broadcasting: 1\nI1122 23:14:35.101803 2521 log.go:172] (0xc000141080) Reply frame received for 1\nI1122 23:14:35.101858 2521 log.go:172] (0xc000141080) (0xc0005301e0) Create stream\nI1122 23:14:35.101874 2521 log.go:172] (0xc000141080) (0xc0005301e0) Stream added, broadcasting: 3\nI1122 23:14:35.102843 2521 log.go:172] (0xc000141080) Reply frame received for 3\nI1122 23:14:35.102891 2521 log.go:172] (0xc000141080) (0xc0006ea000) Create stream\nI1122 23:14:35.102912 2521 log.go:172] (0xc000141080) (0xc0006ea000) Stream added, broadcasting: 5\nI1122 23:14:35.103773 2521 log.go:172] (0xc000141080) Reply frame received for 5\nI1122 23:14:35.162188 2521 log.go:172] (0xc000141080) Data frame received for 5\nI1122 23:14:35.162229 2521 log.go:172] (0xc0006ea000) (5) Data frame handling\nI1122 23:14:35.162257 2521 log.go:172] (0xc000141080) Data frame received for 3\nI1122 23:14:35.162272 2521 log.go:172] (0xc0005301e0) (3) Data frame handling\nI1122 23:14:35.162297 2521 log.go:172] (0xc0005301e0) (3) Data frame sent\nI1122 23:14:35.162306 2521 log.go:172] (0xc000141080) Data frame received for 3\nI1122 23:14:35.162314 2521 log.go:172] (0xc0005301e0) (3) Data frame handling\nI1122 23:14:35.163920 2521 log.go:172] (0xc000141080) Data frame received for 1\nI1122 23:14:35.163969 2521 log.go:172] (0xc000530b40) (1) Data frame handling\nI1122 23:14:35.164047 2521 log.go:172] (0xc000530b40) (1) Data frame sent\nI1122 23:14:35.164103 2521 log.go:172] (0xc000141080) (0xc000530b40) Stream removed, broadcasting: 1\nI1122 23:14:35.164182 2521 log.go:172] (0xc000141080) Go away received\nI1122 23:14:35.164582 2521 log.go:172] (0xc000141080) (0xc000530b40) Stream removed, broadcasting: 1\nI1122 23:14:35.164610 2521 log.go:172] (0xc000141080) (0xc0005301e0) Stream removed, broadcasting: 3\nI1122 23:14:35.164622 2521 log.go:172] (0xc000141080) (0xc0006ea000) Stream removed, broadcasting: 5\n"
Nov 22 23:14:35.169: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:14:35.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6566" for this suite.
Nov 22 23:14:41.198: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:14:41.270: INFO: namespace emptydir-6566 deletion completed in 6.088059814s
• [SLOW TEST:12.603 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
pod should support shared volumes between containers [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:14:41.270: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:14:41.328: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b7408784-468d-44c6-9b81-d750cea6a63f" in namespace "downward-api-8592" to be "success or failure"
Nov 22 23:14:41.345: INFO: Pod "downwardapi-volume-b7408784-468d-44c6-9b81-d750cea6a63f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.088237ms
Nov 22 23:14:43.379: INFO: Pod "downwardapi-volume-b7408784-468d-44c6-9b81-d750cea6a63f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050124356s
Nov 22 23:14:45.382: INFO: Pod "downwardapi-volume-b7408784-468d-44c6-9b81-d750cea6a63f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053908101s
STEP: Saw pod success
Nov 22 23:14:45.382: INFO: Pod "downwardapi-volume-b7408784-468d-44c6-9b81-d750cea6a63f" satisfied condition "success or failure"
Nov 22 23:14:45.385: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b7408784-468d-44c6-9b81-d750cea6a63f container client-container:
STEP: delete the pod
Nov 22 23:14:45.424: INFO: Waiting for pod downwardapi-volume-b7408784-468d-44c6-9b81-d750cea6a63f to disappear
Nov 22 23:14:45.428: INFO: Pod downwardapi-volume-b7408784-468d-44c6-9b81-d750cea6a63f no longer exists
[AfterEach] [sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:14:45.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8592" for this suite.
Nov 22 23:14:51.593: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:14:51.662: INFO: namespace downward-api-8592 deletion completed in 6.23170054s
• [SLOW TEST:10.392 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:14:51.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Nov 22 23:14:59.729: INFO: 7 pods remaining
Nov 22 23:14:59.729: INFO: 0 pods has nil DeletionTimestamp
Nov 22 23:14:59.729: INFO:
Nov 22 23:15:00.898: INFO: 0 pods remaining
Nov 22 23:15:00.898: INFO: 0 pods has nil DeletionTimestamp
Nov 22 23:15:00.898: INFO:
STEP: Gathering metrics
W1122 23:15:01.746434 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 22 23:15:01.746: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:15:01.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4348" for this suite.
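The deleteOptions behavior this garbage-collector test exercises is foreground cascading deletion: with `propagationPolicy: Foreground`, the ReplicationController is kept (with a deletion timestamp) until the garbage collector has removed all of its pods, which matches the "7 pods remaining" then "0 pods remaining" progression above. A hedged sketch of the request body such a delete sends; this is the generic meta/v1 DeleteOptions shape, not the test's literal client call:

```yaml
# DeleteOptions body for the DELETE request on the ReplicationController.
# Foreground keeps the owner object around until all dependents are deleted;
# Background (the other common policy) deletes the owner immediately.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Foreground
```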
Nov 22 23:15:09.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:15:09.898: INFO: namespace gc-4348 deletion completed in 8.148776011s
• [SLOW TEST:18.235 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:15:09.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Nov 22 23:15:10.004: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3017,SelfLink:/api/v1/namespaces/watch-3017/configmaps/e2e-watch-test-resource-version,UID:fe9c7dd3-1746-4ef8-99dc-809429aa4c79,ResourceVersion:10986047,Generation:0,CreationTimestamp:2020-11-22 23:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 22 23:15:10.004: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3017,SelfLink:/api/v1/namespaces/watch-3017/configmaps/e2e-watch-test-resource-version,UID:fe9c7dd3-1746-4ef8-99dc-809429aa4c79,ResourceVersion:10986048,Generation:0,CreationTimestamp:2020-11-22 23:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:15:10.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3017" for this suite.
Nov 22 23:15:16.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:15:16.096: INFO: namespace watch-3017 deletion completed in 6.088242414s • [SLOW TEST:6.198 seconds] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:15:16.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Nov 22 23:15:16.181: INFO: Waiting up to 5m0s for pod "pod-28d704b4-0229-4b67-9036-b3b3f32944af" in namespace "emptydir-8912" to be "success or failure" Nov 22 23:15:16.184: INFO: Pod "pod-28d704b4-0229-4b67-9036-b3b3f32944af": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.409779ms Nov 22 23:15:18.253: INFO: Pod "pod-28d704b4-0229-4b67-9036-b3b3f32944af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072247798s Nov 22 23:15:20.265: INFO: Pod "pod-28d704b4-0229-4b67-9036-b3b3f32944af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.083922885s Nov 22 23:15:22.269: INFO: Pod "pod-28d704b4-0229-4b67-9036-b3b3f32944af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.088054767s STEP: Saw pod success Nov 22 23:15:22.269: INFO: Pod "pod-28d704b4-0229-4b67-9036-b3b3f32944af" satisfied condition "success or failure" Nov 22 23:15:22.273: INFO: Trying to get logs from node iruya-worker2 pod pod-28d704b4-0229-4b67-9036-b3b3f32944af container test-container: STEP: delete the pod Nov 22 23:15:22.314: INFO: Waiting for pod pod-28d704b4-0229-4b67-9036-b3b3f32944af to disappear Nov 22 23:15:22.322: INFO: Pod pod-28d704b4-0229-4b67-9036-b3b3f32944af no longer exists [AfterEach] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:15:22.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8912" for this suite. 
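Editor's note: the log never prints the pod manifest itself; a minimal sketch of what this EmptyDir (non-root, 0777, default medium) test creates might look like the following. Only the pod name comes from the log — the image, command, and UID are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-28d704b4-0229-4b67-9036-b3b3f32944af   # name from the log
spec:
  securityContext:
    runAsUser: 1001                # the "non-root" part of the test (assumed UID)
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                 # illustrative stand-in for the e2e mounttest image
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # check the 0777 permissions
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                   # "default medium": node-local disk, not tmpfs
```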
Nov 22 23:15:28.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:15:28.417: INFO: namespace emptydir-8912 deletion completed in 6.091460665s • [SLOW TEST:12.320 seconds] [sig-storage] EmptyDir volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:15:28.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Nov 22 23:15:28.493: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Nov 22 23:15:28.523: INFO: Waiting for terminating namespaces to be deleted... 
Nov 22 23:15:28.526: INFO: Logging pods the kubelet thinks are on node iruya-worker before test Nov 22 23:15:28.531: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 22 23:15:28.531: INFO: Container kube-proxy ready: true, restart count 0 Nov 22 23:15:28.531: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 22 23:15:28.531: INFO: Container kindnet-cni ready: true, restart count 0 Nov 22 23:15:28.531: INFO: Logging pods the kubelet thinks are on node iruya-worker2 before test Nov 22 23:15:28.535: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 22 23:15:28.535: INFO: Container kindnet-cni ready: true, restart count 0 Nov 22 23:15:28.535: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container statuses recorded) Nov 22 23:15:28.535: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-24f8d1e2-9169-4195-a523-8c523749839a 42 STEP: Trying to relaunch the pod, now with labels. 
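Editor's note: the scheduling flow above — apply a random label to the chosen node, then relaunch the pod with a matching nodeSelector — corresponds to a pod spec along these lines. The label key and value (42) are from the log; the pod name and image are illustrative:

```yaml
# Applied to the chosen node first:
#   kubectl label node iruya-worker kubernetes.io/e2e-24f8d1e2-9169-4195-a523-8c523749839a=42
apiVersion: v1
kind: Pod
metadata:
  name: with-labels                  # illustrative name
spec:
  nodeSelector:
    kubernetes.io/e2e-24f8d1e2-9169-4195-a523-8c523749839a: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1      # illustrative image
```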
STEP: removing the label kubernetes.io/e2e-24f8d1e2-9169-4195-a523-8c523749839a off the node iruya-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-24f8d1e2-9169-4195-a523-8c523749839a [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:15:38.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9031" for this suite. Nov 22 23:15:56.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:15:56.783: INFO: namespace sched-pred-9031 deletion completed in 18.10861786s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:28.366 seconds] [sig-scheduling] SchedulerPredicates [Serial] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:15:56.785: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Nov 22 23:15:57.526: INFO: Pod name wrapped-volume-race-634bbb53-7bbc-4443-8214-5477c887b37f: Found 0 pods out of 5 Nov 22 23:16:02.533: INFO: Pod name wrapped-volume-race-634bbb53-7bbc-4443-8214-5477c887b37f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-634bbb53-7bbc-4443-8214-5477c887b37f in namespace emptydir-wrapper-213, will wait for the garbage collector to delete the pods Nov 22 23:16:18.669: INFO: Deleting ReplicationController wrapped-volume-race-634bbb53-7bbc-4443-8214-5477c887b37f took: 6.349338ms Nov 22 23:16:18.969: INFO: Terminating ReplicationController wrapped-volume-race-634bbb53-7bbc-4443-8214-5477c887b37f pods took: 300.28325ms STEP: Creating RC which spawns configmap-volume pods Nov 22 23:17:05.612: INFO: Pod name wrapped-volume-race-ab37c805-4851-4150-a2de-1b4e704f9017: Found 0 pods out of 5 Nov 22 23:17:10.627: INFO: Pod name wrapped-volume-race-ab37c805-4851-4150-a2de-1b4e704f9017: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-ab37c805-4851-4150-a2de-1b4e704f9017 in namespace emptydir-wrapper-213, will wait for the garbage collector to delete the pods Nov 22 23:17:24.705: INFO: Deleting ReplicationController 
wrapped-volume-race-ab37c805-4851-4150-a2de-1b4e704f9017 took: 6.619079ms Nov 22 23:17:25.006: INFO: Terminating ReplicationController wrapped-volume-race-ab37c805-4851-4150-a2de-1b4e704f9017 pods took: 300.33018ms STEP: Creating RC which spawns configmap-volume pods Nov 22 23:18:05.434: INFO: Pod name wrapped-volume-race-0389afac-e3c2-4496-a6ac-711635781bf7: Found 0 pods out of 5 Nov 22 23:18:10.442: INFO: Pod name wrapped-volume-race-0389afac-e3c2-4496-a6ac-711635781bf7: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-0389afac-e3c2-4496-a6ac-711635781bf7 in namespace emptydir-wrapper-213, will wait for the garbage collector to delete the pods Nov 22 23:18:24.568: INFO: Deleting ReplicationController wrapped-volume-race-0389afac-e3c2-4496-a6ac-711635781bf7 took: 7.073782ms Nov 22 23:18:24.868: INFO: Terminating ReplicationController wrapped-volume-race-0389afac-e3c2-4496-a6ac-711635781bf7 pods took: 300.342914ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:19:06.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-213" for this suite. 
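Editor's note: each RC in this race test spawns five pods that mount many ConfigMap volumes simultaneously, which is what historically triggered the emptyDir wrapper race. A trimmed sketch of such a controller follows — the real test wires up all 50 ConfigMaps, and the names, image, and command here are illustrative:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: wrapped-volume-race-example        # illustrative; the log uses generated UUID names
spec:
  replicas: 5
  selector:
    name: wrapped-volume-race-example
  template:
    metadata:
      labels:
        name: wrapped-volume-race-example
    spec:
      containers:
      - name: test-container
        image: busybox                     # illustrative
        command: ["sleep", "10000"]
        volumeMounts:
        - name: racey-configmap-0          # ...repeated for each of the 50 ConfigMaps
          mountPath: /etc/config-0
      volumes:
      - name: racey-configmap-0
        configMap:
          name: configmap-0                # illustrative ConfigMap name
```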
Nov 22 23:19:14.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:19:14.195: INFO: namespace emptydir-wrapper-213 deletion completed in 8.097378281s • [SLOW TEST:197.410 seconds] [sig-storage] EmptyDir wrapper volumes /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:19:14.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W1122 23:19:55.192982 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 22 23:19:55.193: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:19:55.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1778" for this suite. 
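Editor's note: the "orphan pods" behaviour being verified above is driven by the deleteOptions sent with the RC deletion. A sketch of the equivalent request body, plus the kubectl flag of that era (both shown for illustration; neither appears verbatim in the log):

```yaml
# kubectl equivalent on v1.15: kubectl delete rc <name> --cascade=false
# Raw API: DELETE /api/v1/namespaces/gc-1778/replicationcontrollers/<name> with body:
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan     # the GC removes the ownerReferences and leaves the pods running
```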
Nov 22 23:20:05.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:20:05.340: INFO: namespace gc-1778 deletion completed in 10.144231087s • [SLOW TEST:51.145 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:20:05.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2 Nov 22 23:20:05.553: INFO: Pod name my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2: Found 0 pods out of 1 Nov 22 23:20:10.557: INFO: Pod name my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2: Found 1 pods out of 1 Nov 22 
23:20:10.557: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2" are running Nov 22 23:20:10.560: INFO: Pod "my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2-rh584" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-22 23:20:05 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-22 23:20:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-22 23:20:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-11-22 23:20:05 +0000 UTC Reason: Message:}]) Nov 22 23:20:10.560: INFO: Trying to dial the pod Nov 22 23:20:15.571: INFO: Controller my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2: Got expected result from replica 1 [my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2-rh584]: "my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2-rh584", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:20:15.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3195" for this suite. 
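Editor's note: the RC under test serves each replica's hostname over HTTP, which is what the "Got expected result from replica 1" dial checks. A minimal sketch follows — the RC name is from the log, while the image and port are assumptions based on the standard serve-hostname e2e image, which the log does not print:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2   # name from the log
spec:
  replicas: 1
  selector:
    name: my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2
  template:
    metadata:
      labels:
        name: my-hostname-basic-611592d1-05e4-43c3-b9fc-44d2ea1976e2
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376                                         # assumed port
```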
Nov 22 23:20:21.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:20:21.666: INFO: namespace replication-controller-3195 deletion completed in 6.091416426s • [SLOW TEST:16.326 seconds] [sig-apps] ReplicationController /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:20:21.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Nov 22 23:20:25.748: INFO: Pod pod-hostip-bc30609a-e107-43d5-80e3-78d65fccd0c5 has hostIP: 172.18.0.5 [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:20:25.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2450" for this suite. Nov 22 23:20:47.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:20:47.831: INFO: namespace pods-2450 deletion completed in 22.078701401s • [SLOW TEST:26.165 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:20:47.831: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run job 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612 [It] should create a job from an image when restart is OnFailure [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Nov 22 23:20:47.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-291' Nov 22 23:20:50.580: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Nov 22 23:20:50.580: INFO: stdout: "job.batch/e2e-test-nginx-job created\n" STEP: verifying the job e2e-test-nginx-job was created [AfterEach] [k8s.io] Kubectl run job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617 Nov 22 23:20:50.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-291' Nov 22 23:20:50.715: INFO: stderr: "" Nov 22 23:20:50.715: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:20:50.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-291" for this suite. 
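Editor's note: as the deprecation warning in the log says, `kubectl run --generator=job/v1` was on its way out. The forward-compatible way to get the same Job is `kubectl create job`, or an explicit manifest like the sketch below — only the image and restart policy come from the log; the container name is an assumption:

```yaml
# kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-nginx-job          # assumed container name
        image: docker.io/library/nginx:1.14-alpine
      restartPolicy: OnFailure            # matches the --restart=OnFailure flag in the log
```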
Nov 22 23:20:56.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:20:56.803: INFO: namespace kubectl-291 deletion completed in 6.085586558s • [SLOW TEST:8.972 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run job /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a job from an image when restart is OnFailure [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:20:56.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-cb14cd7a-c023-4c61-9ac4-723f84992a2b STEP: Creating a pod to test 
consume secrets Nov 22 23:20:56.916: INFO: Waiting up to 5m0s for pod "pod-secrets-ad440685-b12a-4911-9eac-b842a8409996" in namespace "secrets-1979" to be "success or failure" Nov 22 23:20:56.932: INFO: Pod "pod-secrets-ad440685-b12a-4911-9eac-b842a8409996": Phase="Pending", Reason="", readiness=false. Elapsed: 15.854646ms Nov 22 23:20:58.957: INFO: Pod "pod-secrets-ad440685-b12a-4911-9eac-b842a8409996": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041005275s Nov 22 23:21:00.965: INFO: Pod "pod-secrets-ad440685-b12a-4911-9eac-b842a8409996": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049340272s STEP: Saw pod success Nov 22 23:21:00.965: INFO: Pod "pod-secrets-ad440685-b12a-4911-9eac-b842a8409996" satisfied condition "success or failure" Nov 22 23:21:00.968: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-ad440685-b12a-4911-9eac-b842a8409996 container secret-volume-test: STEP: delete the pod Nov 22 23:21:01.000: INFO: Waiting for pod pod-secrets-ad440685-b12a-4911-9eac-b842a8409996 to disappear Nov 22 23:21:01.041: INFO: Pod pod-secrets-ad440685-b12a-4911-9eac-b842a8409996 no longer exists [AfterEach] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:21:01.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1979" for this suite. 
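Editor's note: the secret name comes from the log; the rest of this pod sketch shows where `defaultMode` and `fsGroup` sit in the spec being exercised. The mode, UID/GID, image, and command are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-ad440685-b12a-4911-9eac-b842a8409996   # name from the log
spec:
  securityContext:
    runAsUser: 1000          # non-root (assumed UID)
    fsGroup: 1001            # group ownership applied to the secret volume (assumed GID)
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox           # illustrative stand-in for the e2e mounttest image
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-cb14cd7a-c023-4c61-9ac4-723f84992a2b
      defaultMode: 0400      # assumed mode; files in the volume inherit it
```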
Nov 22 23:21:07.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:21:07.182: INFO: namespace secrets-1979 deletion completed in 6.137561693s • [SLOW TEST:10.378 seconds] [sig-storage] Secrets /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:21:07.183: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Nov 22 
23:21:07.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-47c8aab5-7d95-4d14-b679-ea88696da736" in namespace "downward-api-7833" to be "success or failure" Nov 22 23:21:07.244: INFO: Pod "downwardapi-volume-47c8aab5-7d95-4d14-b679-ea88696da736": Phase="Pending", Reason="", readiness=false. Elapsed: 22.267871ms Nov 22 23:21:09.298: INFO: Pod "downwardapi-volume-47c8aab5-7d95-4d14-b679-ea88696da736": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076805994s Nov 22 23:21:11.302: INFO: Pod "downwardapi-volume-47c8aab5-7d95-4d14-b679-ea88696da736": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.080937357s STEP: Saw pod success Nov 22 23:21:11.302: INFO: Pod "downwardapi-volume-47c8aab5-7d95-4d14-b679-ea88696da736" satisfied condition "success or failure" Nov 22 23:21:11.305: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-47c8aab5-7d95-4d14-b679-ea88696da736 container client-container: STEP: delete the pod Nov 22 23:21:11.324: INFO: Waiting for pod downwardapi-volume-47c8aab5-7d95-4d14-b679-ea88696da736 to disappear Nov 22 23:21:11.382: INFO: Pod downwardapi-volume-47c8aab5-7d95-4d14-b679-ea88696da736 no longer exists [AfterEach] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:21:11.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7833" for this suite. 
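Editor's note: the downwardAPI volume plugin exercised here exposes the container's own memory request as a file via `resourceFieldRef`. A sketch of such a pod follows — the request size, image, pod name, and paths are illustrative, not from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example     # illustrative; the log uses a generated name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                     # illustrative
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: "32Mi"                 # assumed value; exposed to the container as a file
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.memory
```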
Nov 22 23:21:17.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:21:17.600: INFO: namespace downward-api-7833 deletion completed in 6.213596163s • [SLOW TEST:10.417 seconds] [sig-storage] Downward API volume /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:21:17.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication 
controller Nov 22 23:21:17.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-405' Nov 22 23:21:17.957: INFO: stderr: "" Nov 22 23:21:17.957: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 22 23:21:17.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-405' Nov 22 23:21:18.109: INFO: stderr: "" Nov 22 23:21:18.109: INFO: stdout: "update-demo-nautilus-28fww update-demo-nautilus-p29j5 " Nov 22 23:21:18.109: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28fww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:18.204: INFO: stderr: "" Nov 22 23:21:18.204: INFO: stdout: "" Nov 22 23:21:18.204: INFO: update-demo-nautilus-28fww is created but not running Nov 22 23:21:23.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-405' Nov 22 23:21:23.305: INFO: stderr: "" Nov 22 23:21:23.305: INFO: stdout: "update-demo-nautilus-28fww update-demo-nautilus-p29j5 " Nov 22 23:21:23.305: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28fww -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:23.389: INFO: stderr: "" Nov 22 23:21:23.389: INFO: stdout: "true" Nov 22 23:21:23.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-28fww -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:23.484: INFO: stderr: "" Nov 22 23:21:23.484: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 22 23:21:23.484: INFO: validating pod update-demo-nautilus-28fww Nov 22 23:21:23.489: INFO: got data: { "image": "nautilus.jpg" } Nov 22 23:21:23.489: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 22 23:21:23.489: INFO: update-demo-nautilus-28fww is verified up and running Nov 22 23:21:23.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p29j5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:23.569: INFO: stderr: "" Nov 22 23:21:23.569: INFO: stdout: "true" Nov 22 23:21:23.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p29j5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:23.657: INFO: stderr: "" Nov 22 23:21:23.657: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 22 23:21:23.657: INFO: validating pod update-demo-nautilus-p29j5 Nov 22 23:21:23.660: INFO: got data: { "image": "nautilus.jpg" } Nov 22 23:21:23.660: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 22 23:21:23.660: INFO: update-demo-nautilus-p29j5 is verified up and running STEP: scaling down the replication controller Nov 22 23:21:23.662: INFO: scanned /root for discovery docs: Nov 22 23:21:23.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-405' Nov 22 23:21:24.784: INFO: stderr: "" Nov 22 23:21:24.784: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Nov 22 23:21:24.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-405' Nov 22 23:21:24.877: INFO: stderr: "" Nov 22 23:21:24.877: INFO: stdout: "update-demo-nautilus-28fww update-demo-nautilus-p29j5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 22 23:21:29.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-405' Nov 22 23:21:29.974: INFO: stderr: "" Nov 22 23:21:29.974: INFO: stdout: "update-demo-nautilus-28fww update-demo-nautilus-p29j5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 22 23:21:34.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-405' Nov 22 23:21:35.119: INFO: stderr: "" Nov 22 23:21:35.119: INFO: stdout: "update-demo-nautilus-28fww update-demo-nautilus-p29j5 " STEP: Replicas for name=update-demo: expected=1 actual=2 Nov 22 23:21:40.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-405' Nov 22 23:21:40.224: INFO: stderr: "" Nov 
22 23:21:40.224: INFO: stdout: "update-demo-nautilus-p29j5 " Nov 22 23:21:40.224: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p29j5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:40.306: INFO: stderr: "" Nov 22 23:21:40.306: INFO: stdout: "true" Nov 22 23:21:40.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p29j5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:40.398: INFO: stderr: "" Nov 22 23:21:40.398: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 22 23:21:40.398: INFO: validating pod update-demo-nautilus-p29j5 Nov 22 23:21:40.401: INFO: got data: { "image": "nautilus.jpg" } Nov 22 23:21:40.401: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 22 23:21:40.401: INFO: update-demo-nautilus-p29j5 is verified up and running STEP: scaling up the replication controller Nov 22 23:21:40.403: INFO: scanned /root for discovery docs: Nov 22 23:21:40.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-405' Nov 22 23:21:41.528: INFO: stderr: "" Nov 22 23:21:41.528: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Nov 22 23:21:41.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-405' Nov 22 23:21:41.630: INFO: stderr: "" Nov 22 23:21:41.630: INFO: stdout: "update-demo-nautilus-8vw64 update-demo-nautilus-p29j5 " Nov 22 23:21:41.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vw64 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:41.731: INFO: stderr: "" Nov 22 23:21:41.731: INFO: stdout: "" Nov 22 23:21:41.731: INFO: update-demo-nautilus-8vw64 is created but not running Nov 22 23:21:46.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-405' Nov 22 23:21:46.838: INFO: stderr: "" Nov 22 23:21:46.838: INFO: stdout: "update-demo-nautilus-8vw64 update-demo-nautilus-p29j5 " Nov 22 23:21:46.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vw64 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:46.935: INFO: stderr: "" Nov 22 23:21:46.935: INFO: stdout: "true" Nov 22 23:21:46.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8vw64 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:47.027: INFO: stderr: "" Nov 22 23:21:47.027: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 22 23:21:47.027: INFO: validating pod update-demo-nautilus-8vw64 Nov 22 23:21:47.031: INFO: got data: { "image": "nautilus.jpg" } Nov 22 23:21:47.031: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Nov 22 23:21:47.031: INFO: update-demo-nautilus-8vw64 is verified up and running Nov 22 23:21:47.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p29j5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:47.124: INFO: stderr: "" Nov 22 23:21:47.124: INFO: stdout: "true" Nov 22 23:21:47.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-p29j5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-405' Nov 22 23:21:47.217: INFO: stderr: "" Nov 22 23:21:47.217: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Nov 22 23:21:47.217: INFO: validating pod update-demo-nautilus-p29j5 Nov 22 23:21:47.221: INFO: got data: { "image": "nautilus.jpg" } Nov 22 23:21:47.221: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Nov 22 23:21:47.221: INFO: update-demo-nautilus-p29j5 is verified up and running STEP: using delete to clean up resources Nov 22 23:21:47.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-405' Nov 22 23:21:47.335: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Nov 22 23:21:47.335: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Nov 22 23:21:47.335: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-405' Nov 22 23:21:48.321: INFO: stderr: "No resources found.\n" Nov 22 23:21:48.321: INFO: stdout: "" Nov 22 23:21:48.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-405 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Nov 22 23:21:48.420: INFO: stderr: "" Nov 22 23:21:48.420: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:21:48.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-405" for this suite. 
Nov 22 23:22:10.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:22:10.793: INFO: namespace kubectl-405 deletion completed in 22.370140407s • [SLOW TEST:53.193 seconds] [sig-cli] Kubectl client /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:22:10.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
STEP: Creating the pod Nov 22 23:22:15.393: INFO: Successfully updated pod "annotationupdatebe5bb779-23fa-4db2-aca4-d9d506f46ef3" [AfterEach] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:22:19.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4225" for this suite. Nov 22 23:22:41.475: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:22:41.550: INFO: namespace projected-4225 deletion completed in 22.113059735s • [SLOW TEST:30.757 seconds] [sig-storage] Projected downwardAPI /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:22:41.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] 
[Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-cc3e45e5-826d-4bed-84e0-0f09a44e14f5 STEP: Creating a pod to test consume configMaps Nov 22 23:22:41.655: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-45bb394e-1e63-47ce-88ab-9bfe0e6c4702" in namespace "projected-647" to be "success or failure" Nov 22 23:22:41.696: INFO: Pod "pod-projected-configmaps-45bb394e-1e63-47ce-88ab-9bfe0e6c4702": Phase="Pending", Reason="", readiness=false. Elapsed: 41.626591ms Nov 22 23:22:43.701: INFO: Pod "pod-projected-configmaps-45bb394e-1e63-47ce-88ab-9bfe0e6c4702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046209012s Nov 22 23:22:45.704: INFO: Pod "pod-projected-configmaps-45bb394e-1e63-47ce-88ab-9bfe0e6c4702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049421594s STEP: Saw pod success Nov 22 23:22:45.704: INFO: Pod "pod-projected-configmaps-45bb394e-1e63-47ce-88ab-9bfe0e6c4702" satisfied condition "success or failure" Nov 22 23:22:45.707: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-45bb394e-1e63-47ce-88ab-9bfe0e6c4702 container projected-configmap-volume-test: STEP: delete the pod Nov 22 23:22:45.803: INFO: Waiting for pod pod-projected-configmaps-45bb394e-1e63-47ce-88ab-9bfe0e6c4702 to disappear Nov 22 23:22:45.813: INFO: Pod pod-projected-configmaps-45bb394e-1e63-47ce-88ab-9bfe0e6c4702 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:22:45.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-647" for this suite. 
Nov 22 23:22:52.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:22:52.169: INFO: namespace projected-647 deletion completed in 6.352231152s • [SLOW TEST:10.617 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:22:52.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1fd31d7d-76cd-452d-a900-dbcbbaa3f3fe STEP: Creating a pod to test consume configMaps Nov 22 23:22:52.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-43d9e47d-8cfa-4a75-92bc-5aebeefa8640" in namespace "projected-9252" to be "success or failure" Nov 22 23:22:52.268: INFO: Pod 
"pod-projected-configmaps-43d9e47d-8cfa-4a75-92bc-5aebeefa8640": Phase="Pending", Reason="", readiness=false. Elapsed: 16.803743ms Nov 22 23:22:54.272: INFO: Pod "pod-projected-configmaps-43d9e47d-8cfa-4a75-92bc-5aebeefa8640": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020743535s Nov 22 23:22:56.276: INFO: Pod "pod-projected-configmaps-43d9e47d-8cfa-4a75-92bc-5aebeefa8640": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024948733s STEP: Saw pod success Nov 22 23:22:56.276: INFO: Pod "pod-projected-configmaps-43d9e47d-8cfa-4a75-92bc-5aebeefa8640" satisfied condition "success or failure" Nov 22 23:22:56.279: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-43d9e47d-8cfa-4a75-92bc-5aebeefa8640 container projected-configmap-volume-test: STEP: delete the pod Nov 22 23:22:56.299: INFO: Waiting for pod pod-projected-configmaps-43d9e47d-8cfa-4a75-92bc-5aebeefa8640 to disappear Nov 22 23:22:56.304: INFO: Pod pod-projected-configmaps-43d9e47d-8cfa-4a75-92bc-5aebeefa8640 no longer exists [AfterEach] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:22:56.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9252" for this suite. 
Nov 22 23:23:02.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:23:02.398: INFO: namespace projected-9252 deletion completed in 6.090302798s • [SLOW TEST:10.229 seconds] [sig-storage] Projected configMap /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:23:02.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W1122 23:23:03.503812 6 metrics_grabber.go:79] Master node is not registered. 
Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Nov 22 23:23:03.503: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:23:03.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7088" for this suite. 
Nov 22 23:23:09.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:23:09.628: INFO: namespace gc-7088 deletion completed in 6.122212436s • [SLOW TEST:7.230 seconds] [sig-api-machinery] Garbage collector /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:23:09.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Nov 22 23:23:09.696: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods 
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:23:13.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5753" for this suite. Nov 22 23:23:59.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Nov 22 23:23:59.863: INFO: namespace pods-5753 deletion completed in 46.090822242s • [SLOW TEST:50.234 seconds] [k8s.io] Pods /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Nov 22 23:23:59.863: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Nov 22 23:23:59.975: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4619,SelfLink:/api/v1/namespaces/watch-4619/configmaps/e2e-watch-test-watch-closed,UID:92757df1-9462-43ff-9735-e57dc7a7ba87,ResourceVersion:10988518,Generation:0,CreationTimestamp:2020-11-22 23:23:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Nov 22 23:23:59.975: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4619,SelfLink:/api/v1/namespaces/watch-4619/configmaps/e2e-watch-test-watch-closed,UID:92757df1-9462-43ff-9735-e57dc7a7ba87,ResourceVersion:10988519,Generation:0,CreationTimestamp:2020-11-22 23:23:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Nov 22 23:23:59.986: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4619,SelfLink:/api/v1/namespaces/watch-4619/configmaps/e2e-watch-test-watch-closed,UID:92757df1-9462-43ff-9735-e57dc7a7ba87,ResourceVersion:10988520,Generation:0,CreationTimestamp:2020-11-22 23:23:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Nov 22 23:23:59.986: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4619,SelfLink:/api/v1/namespaces/watch-4619/configmaps/e2e-watch-test-watch-closed,UID:92757df1-9462-43ff-9735-e57dc7a7ba87,ResourceVersion:10988521,Generation:0,CreationTimestamp:2020-11-22 23:23:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Nov 22 23:23:59.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4619" for this suite. 
Nov 22 23:24:06.005: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:24:06.110: INFO: namespace watch-4619 deletion completed in 6.119633972s

• [SLOW TEST:6.247 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:24:06.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:24:32.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8247" for this suite.
Nov 22 23:24:38.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:24:38.436: INFO: namespace namespaces-8247 deletion completed in 6.132690262s
STEP: Destroying namespace "nsdeletetest-6359" for this suite.
Nov 22 23:24:38.438: INFO: Namespace nsdeletetest-6359 was already deleted
STEP: Destroying namespace "nsdeletetest-2766" for this suite.
Nov 22 23:24:44.460: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:24:44.538: INFO: namespace nsdeletetest-2766 deletion completed in 6.100336935s

• [SLOW TEST:38.428 seconds]
[sig-api-machinery] Namespaces [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:24:44.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 22 23:24:44.577: INFO: Waiting up to 5m0s for pod "pod-f765958a-1e57-4a4c-8b1e-62aa7dabd5af" in namespace "emptydir-9886" to be "success or failure"
Nov 22 23:24:44.641: INFO: Pod "pod-f765958a-1e57-4a4c-8b1e-62aa7dabd5af": Phase="Pending", Reason="", readiness=false. Elapsed: 64.370996ms
Nov 22 23:24:46.646: INFO: Pod "pod-f765958a-1e57-4a4c-8b1e-62aa7dabd5af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068575133s
Nov 22 23:24:48.650: INFO: Pod "pod-f765958a-1e57-4a4c-8b1e-62aa7dabd5af": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.072473579s
STEP: Saw pod success
Nov 22 23:24:48.650: INFO: Pod "pod-f765958a-1e57-4a4c-8b1e-62aa7dabd5af" satisfied condition "success or failure"
Nov 22 23:24:48.652: INFO: Trying to get logs from node iruya-worker2 pod pod-f765958a-1e57-4a4c-8b1e-62aa7dabd5af container test-container: 
STEP: delete the pod
Nov 22 23:24:48.666: INFO: Waiting for pod pod-f765958a-1e57-4a4c-8b1e-62aa7dabd5af to disappear
Nov 22 23:24:48.670: INFO: Pod pod-f765958a-1e57-4a4c-8b1e-62aa7dabd5af no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:24:48.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9886" for this suite.
Nov 22 23:24:54.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:24:54.761: INFO: namespace emptydir-9886 deletion completed in 6.087706617s

• [SLOW TEST:10.222 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:24:54.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:24:54.865: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
alternatives.log
containers/
>>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Nov 22 23:25:05.120: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Nov 22 23:25:20.218: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:25:20.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8574" for this suite.
Nov 22 23:25:26.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:25:26.362: INFO: namespace pods-8574 deletion completed in 6.13365983s

• [SLOW TEST:25.338 seconds]
[k8s.io] [sig-node] Pods Extended
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
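The Delete Grace Period test above submits a pod, deletes it gracefully, and then confirms the kubelet observed the termination notice before the API object disappears. A minimal sketch of where the grace period lives in a pod spec (the name, label, and image here are illustrative, not the test's actual objects):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: graceful-demo              # hypothetical name
  labels:
    name: graceful-demo            # the test finds its pod via a label selector
spec:
  # Seconds the kubelet gives containers to shut down after the DELETE is issued;
  # the pod object is only removed once termination completes or this expires.
  terminationGracePeriodSeconds: 30
  containers:
  - name: main
    image: k8s.gcr.io/pause:3.1    # illustrative image
```

Deleting it with `kubectl delete pod graceful-demo --grace-period=30` follows the same graceful path the test exercises.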
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:25:26.362: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Nov 22 23:25:30.967: INFO: Successfully updated pod "labelsupdate841ffc51-84c5-4edd-bb1d-5349b2363b82"
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:25:35.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1932" for this suite.
Nov 22 23:25:57.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:25:57.112: INFO: namespace downward-api-1932 deletion completed in 22.106309297s

• [SLOW TEST:30.750 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:25:57.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-2ea74b5b-502e-48c4-80f5-10c48b4430cc
STEP: Creating a pod to test consume configMaps
Nov 22 23:25:57.205: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-50fe3b31-4c2d-4530-afbf-182935a7f75c" in namespace "projected-735" to be "success or failure"
Nov 22 23:25:57.230: INFO: Pod "pod-projected-configmaps-50fe3b31-4c2d-4530-afbf-182935a7f75c": Phase="Pending", Reason="", readiness=false. Elapsed: 25.266736ms
Nov 22 23:25:59.283: INFO: Pod "pod-projected-configmaps-50fe3b31-4c2d-4530-afbf-182935a7f75c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078069341s
Nov 22 23:26:01.292: INFO: Pod "pod-projected-configmaps-50fe3b31-4c2d-4530-afbf-182935a7f75c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086981406s
STEP: Saw pod success
Nov 22 23:26:01.292: INFO: Pod "pod-projected-configmaps-50fe3b31-4c2d-4530-afbf-182935a7f75c" satisfied condition "success or failure"
Nov 22 23:26:01.294: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-50fe3b31-4c2d-4530-afbf-182935a7f75c container projected-configmap-volume-test: 
STEP: delete the pod
Nov 22 23:26:01.362: INFO: Waiting for pod pod-projected-configmaps-50fe3b31-4c2d-4530-afbf-182935a7f75c to disappear
Nov 22 23:26:01.381: INFO: Pod pod-projected-configmaps-50fe3b31-4c2d-4530-afbf-182935a7f75c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:26:01.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-735" for this suite.
Nov 22 23:26:07.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:26:07.470: INFO: namespace projected-735 deletion completed in 6.085642212s

• [SLOW TEST:10.357 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
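The projected-configMap test above verifies that a ConfigMap key can be renamed inside the volume via an item mapping. A minimal manifest showing that mapping (the ConfigMap name, key, and image are illustrative, not the test's generated objects):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo          # hypothetical name
spec:
  containers:
  - name: test
    image: busybox                 # illustrative image
    command: ["cat", "/etc/projected/renamed-key"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: my-config          # hypothetical ConfigMap
          items:
          - key: data-1            # key as stored in the ConfigMap
            path: renamed-key      # filename it appears under in the volume
```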
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:26:07.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-h9qv
STEP: Creating a pod to test atomic-volume-subpath
Nov 22 23:26:07.597: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-h9qv" in namespace "subpath-7116" to be "success or failure"
Nov 22 23:26:07.601: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Pending", Reason="", readiness=false. Elapsed: 3.279109ms
Nov 22 23:26:09.630: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032712543s
Nov 22 23:26:11.648: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 4.050730238s
Nov 22 23:26:13.652: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 6.054574337s
Nov 22 23:26:15.657: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 8.059314354s
Nov 22 23:26:17.666: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 10.068747631s
Nov 22 23:26:19.670: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 12.072208908s
Nov 22 23:26:21.674: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 14.076747921s
Nov 22 23:26:23.678: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 16.081114473s
Nov 22 23:26:25.682: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 18.084990473s
Nov 22 23:26:27.686: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 20.08882158s
Nov 22 23:26:29.702: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Running", Reason="", readiness=true. Elapsed: 22.104622597s
Nov 22 23:26:31.706: INFO: Pod "pod-subpath-test-configmap-h9qv": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.108424978s
STEP: Saw pod success
Nov 22 23:26:31.706: INFO: Pod "pod-subpath-test-configmap-h9qv" satisfied condition "success or failure"
Nov 22 23:26:31.709: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-h9qv container test-container-subpath-configmap-h9qv: 
STEP: delete the pod
Nov 22 23:26:31.750: INFO: Waiting for pod pod-subpath-test-configmap-h9qv to disappear
Nov 22 23:26:31.752: INFO: Pod pod-subpath-test-configmap-h9qv no longer exists
STEP: Deleting pod pod-subpath-test-configmap-h9qv
Nov 22 23:26:31.752: INFO: Deleting pod "pod-subpath-test-configmap-h9qv" in namespace "subpath-7116"
[AfterEach] [sig-storage] Subpath
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:26:31.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7116" for this suite.
Nov 22 23:26:37.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:26:37.866: INFO: namespace subpath-7116 deletion completed in 6.108571593s

• [SLOW TEST:30.395 seconds]
[sig-storage] Subpath
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
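The Subpath test above keeps the pod Running through several atomic-writer update cycles before it Succeeds. The arrangement it exercises, a configMap volume mounted through `subPath`, can be sketched as follows (names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo               # hypothetical name
spec:
  containers:
  - name: test
    image: busybox                 # illustrative image
    command: ["cat", "/mnt/file"]
    volumeMounts:
    - name: cfg
      mountPath: /mnt/file
      subPath: my-key              # mounts only this key's file from the volume
  volumes:
  - name: cfg
    configMap:
      name: my-config              # hypothetical ConfigMap
```

Note that, unlike a whole-volume mount, a `subPath` mount does not receive updates when the underlying ConfigMap changes, which is part of what makes the atomic-writer behavior worth testing.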
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:26:37.866: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-721b5856-a05d-413e-950d-1ebd59fa81ab
STEP: Creating a pod to test consume secrets
Nov 22 23:26:37.923: INFO: Waiting up to 5m0s for pod "pod-secrets-1e066c61-4c83-4d31-8985-f0b72d73c4b7" in namespace "secrets-1576" to be "success or failure"
Nov 22 23:26:37.939: INFO: Pod "pod-secrets-1e066c61-4c83-4d31-8985-f0b72d73c4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 16.225826ms
Nov 22 23:26:39.945: INFO: Pod "pod-secrets-1e066c61-4c83-4d31-8985-f0b72d73c4b7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022307179s
Nov 22 23:26:41.950: INFO: Pod "pod-secrets-1e066c61-4c83-4d31-8985-f0b72d73c4b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027014493s
STEP: Saw pod success
Nov 22 23:26:41.950: INFO: Pod "pod-secrets-1e066c61-4c83-4d31-8985-f0b72d73c4b7" satisfied condition "success or failure"
Nov 22 23:26:41.952: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-1e066c61-4c83-4d31-8985-f0b72d73c4b7 container secret-volume-test: 
STEP: delete the pod
Nov 22 23:26:42.014: INFO: Waiting for pod pod-secrets-1e066c61-4c83-4d31-8985-f0b72d73c4b7 to disappear
Nov 22 23:26:42.017: INFO: Pod pod-secrets-1e066c61-4c83-4d31-8985-f0b72d73c4b7 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:26:42.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1576" for this suite.
Nov 22 23:26:48.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:26:48.106: INFO: namespace secrets-1576 deletion completed in 6.086169426s

• [SLOW TEST:10.240 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:26:48.106: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Nov 22 23:26:56.241: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:26:56.249: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:26:58.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:26:58.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:00.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:00.253: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:02.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:02.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:04.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:04.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:06.250: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:06.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:08.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:08.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:10.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:10.253: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:12.250: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:12.255: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:14.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:14.253: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:16.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:16.253: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:18.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:18.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:20.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:20.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:22.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:22.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:24.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:24.254: INFO: Pod pod-with-prestop-exec-hook still exists
Nov 22 23:27:26.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Nov 22 23:27:26.253: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:27:26.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-793" for this suite.
Nov 22 23:27:48.278: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:27:48.369: INFO: namespace container-lifecycle-hook-793 deletion completed in 22.105960114s

• [SLOW TEST:60.263 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:27:48.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Nov 22 23:27:48.410: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1785'
Nov 22 23:27:48.759: INFO: stderr: ""
Nov 22 23:27:48.759: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Nov 22 23:27:48.759: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1785'
Nov 22 23:27:48.904: INFO: stderr: ""
Nov 22 23:27:48.905: INFO: stdout: "update-demo-nautilus-cg9br update-demo-nautilus-mqdj2 "
Nov 22 23:27:48.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg9br -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1785'
Nov 22 23:27:48.999: INFO: stderr: ""
Nov 22 23:27:48.999: INFO: stdout: ""
Nov 22 23:27:48.999: INFO: update-demo-nautilus-cg9br is created but not running
Nov 22 23:27:54.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1785'
Nov 22 23:27:54.096: INFO: stderr: ""
Nov 22 23:27:54.096: INFO: stdout: "update-demo-nautilus-cg9br update-demo-nautilus-mqdj2 "
Nov 22 23:27:54.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg9br -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1785'
Nov 22 23:27:54.181: INFO: stderr: ""
Nov 22 23:27:54.181: INFO: stdout: "true"
Nov 22 23:27:54.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cg9br -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1785'
Nov 22 23:27:54.277: INFO: stderr: ""
Nov 22 23:27:54.277: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 22 23:27:54.277: INFO: validating pod update-demo-nautilus-cg9br
Nov 22 23:27:54.282: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 22 23:27:54.282: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 22 23:27:54.282: INFO: update-demo-nautilus-cg9br is verified up and running
Nov 22 23:27:54.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqdj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1785'
Nov 22 23:27:54.369: INFO: stderr: ""
Nov 22 23:27:54.369: INFO: stdout: "true"
Nov 22 23:27:54.369: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mqdj2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1785'
Nov 22 23:27:54.459: INFO: stderr: ""
Nov 22 23:27:54.459: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Nov 22 23:27:54.459: INFO: validating pod update-demo-nautilus-mqdj2
Nov 22 23:27:54.463: INFO: got data: {
  "image": "nautilus.jpg"
}

Nov 22 23:27:54.463: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Nov 22 23:27:54.463: INFO: update-demo-nautilus-mqdj2 is verified up and running
STEP: using delete to clean up resources
Nov 22 23:27:54.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1785'
Nov 22 23:27:54.560: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 22 23:27:54.560: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Nov 22 23:27:54.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1785'
Nov 22 23:27:54.654: INFO: stderr: "No resources found.\n"
Nov 22 23:27:54.654: INFO: stdout: ""
Nov 22 23:27:54.654: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1785 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 22 23:27:54.749: INFO: stderr: ""
Nov 22 23:27:54.750: INFO: stdout: "update-demo-nautilus-cg9br\nupdate-demo-nautilus-mqdj2\n"
Nov 22 23:27:55.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1785'
Nov 22 23:27:55.364: INFO: stderr: "No resources found.\n"
Nov 22 23:27:55.364: INFO: stdout: ""
Nov 22 23:27:55.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1785 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 22 23:27:55.632: INFO: stderr: ""
Nov 22 23:27:55.632: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:27:55.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1785" for this suite.
Nov 22 23:28:17.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:28:17.740: INFO: namespace kubectl-1785 deletion completed in 22.101570921s

• [SLOW TEST:29.370 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
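Aside: the Go template run repeatedly above ({{if (exists . "status" "containerStatuses")}}...{{end}}) prints "true" only when the named container reports a running state, which is why the first poll returned an empty stdout and the test retried. The same check, sketched in Python against hypothetical pod-status dicts (field names follow the core/v1 Pod JSON; the sample data is illustrative, not taken from this run):

```python
def container_running(pod: dict, container_name: str) -> bool:
    """Return True iff the named container's status carries a 'running' state,
    mirroring the e2e go-template readiness probe."""
    for cs in pod.get("status", {}).get("containerStatuses", []):
        if cs.get("name") == container_name and "running" in cs.get("state", {}):
            return True
    return False

# A pod still being created yields False (the empty stdout at 23:27:48);
# once the container starts, the check flips to True (the "true" at 23:27:54).
pending = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"waiting": {"reason": "ContainerCreating"}}}]}}
running = {"status": {"containerStatuses": [
    {"name": "update-demo", "state": {"running": {"startedAt": "2020-11-22T23:27:52Z"}}}]}}

print(container_running(pending, "update-demo"))  # False
print(container_running(running, "update-demo"))  # True
```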
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:28:17.740: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Nov 22 23:28:17.814: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Nov 22 23:28:18.350: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Nov 22 23:28:20.719: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684498, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684498, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684498, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684498, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:28:22.725: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684498, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684498, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684498, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684498, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:28:25.351: INFO: Waited 619.319371ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:28:25.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-4183" for this suite.
Nov 22 23:28:32.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:28:32.078: INFO: namespace aggregator-4183 deletion completed in 6.258461243s

• [SLOW TEST:14.337 seconds]
[sig-api-machinery] Aggregator
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
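Aside: the two DeploymentStatus dumps above show the poll loop waiting for the sample-apiserver deployment's "Available" condition to turn True (it starts False with reason MinimumReplicasUnavailable). A minimal sketch of that condition check in Python, with hypothetical status dicts shaped like the apps/v1 DeploymentStatus JSON (sample data is illustrative):

```python
def deployment_available(status: dict) -> bool:
    """True once the Deployment's 'Available' condition reports status 'True',
    which is what the poll loop above waits for."""
    for cond in status.get("conditions", []):
        if cond.get("type") == "Available":
            return cond.get("status") == "True"
    return False

# Echoes the two states seen in the log: first unavailable, then ready.
not_ready = {"conditions": [
    {"type": "Available", "status": "False", "reason": "MinimumReplicasUnavailable"},
    {"type": "Progressing", "status": "True", "reason": "ReplicaSetUpdated"}]}
ready = {"conditions": [
    {"type": "Available", "status": "True", "reason": "MinimumReplicasAvailable"}]}

print(deployment_available(not_ready))  # False
print(deployment_available(ready))      # True
```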
SSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:28:32.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Nov 22 23:28:32.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4801'
Nov 22 23:28:32.280: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Nov 22 23:28:32.280: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Nov 22 23:28:34.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4801'
Nov 22 23:28:34.586: INFO: stderr: ""
Nov 22 23:28:34.586: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:28:34.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4801" for this suite.
Nov 22 23:28:56.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:28:56.950: INFO: namespace kubectl-4801 deletion completed in 22.245715354s

• [SLOW TEST:24.872 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:28:56.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:28:57.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:29:01.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6913" for this suite.
Nov 22 23:29:51.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:29:51.262: INFO: namespace pods-6913 deletion completed in 50.090604539s

• [SLOW TEST:54.312 seconds]
[k8s.io] Pods
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
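Aside: the websocket exec tested above uses the channel.k8s.io subprotocol, in which each binary frame is prefixed with a one-byte channel id (0 = stdin, 1 = stdout, 2 = stderr). A sketch of demultiplexing such frames in Python, with made-up frame payloads for illustration:

```python
STDIN, STDOUT, STDERR = 0, 1, 2  # channel ids in the channel.k8s.io subprotocol

def demux(frames):
    """Split a sequence of websocket frames into per-channel byte streams.
    The first byte of each frame names the channel; the rest is payload."""
    streams = {}
    for frame in frames:
        channel, payload = frame[0], frame[1:]
        streams[channel] = streams.get(channel, b"") + payload
    return streams

# Hypothetical frames, as the server might stream exec output:
frames = [b"\x01hello ", b"\x01world\n", b"\x02some warning\n"]
print(demux(frames)[STDOUT])  # b'hello world\n'
print(demux(frames)[STDERR])  # b'some warning\n'
```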
SSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:29:51.263: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:29:51.292: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Nov 22 23:29:51.327: INFO: Pod name sample-pod: Found 0 pods out of 1
Nov 22 23:29:56.331: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Nov 22 23:29:56.331: INFO: Creating deployment "test-rolling-update-deployment"
Nov 22 23:29:56.336: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Nov 22 23:29:56.344: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Nov 22 23:29:58.362: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Nov 22 23:29:58.365: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684596, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684596, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684596, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684596, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:30:00.405: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Nov 22 23:30:00.414: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-457,SelfLink:/apis/apps/v1/namespaces/deployment-457/deployments/test-rolling-update-deployment,UID:0adfc765-5281-4b73-a503-8889151678a0,ResourceVersion:10989705,Generation:1,CreationTimestamp:2020-11-22 23:29:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-11-22 23:29:56 +0000 UTC 2020-11-22 23:29:56 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-11-22 23:30:00 +0000 UTC 2020-11-22 23:29:56 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Nov 22 23:30:00.417: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-457,SelfLink:/apis/apps/v1/namespaces/deployment-457/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:3df05b06-6012-445d-ad61-d032117d553b,ResourceVersion:10989693,Generation:1,CreationTimestamp:2020-11-22 23:29:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0adfc765-5281-4b73-a503-8889151678a0 0xc003de2d97 0xc003de2d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Nov 22 23:30:00.417: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Nov 22 23:30:00.417: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-457,SelfLink:/apis/apps/v1/namespaces/deployment-457/replicasets/test-rolling-update-controller,UID:323e564b-034d-464d-ba7c-c25bb9859625,ResourceVersion:10989703,Generation:2,CreationTimestamp:2020-11-22 23:29:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 0adfc765-5281-4b73-a503-8889151678a0 0xc003de2caf 0xc003de2cc0}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Nov 22 23:30:00.419: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-6lgvv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-6lgvv,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-457,SelfLink:/api/v1/namespaces/deployment-457/pods/test-rolling-update-deployment-79f6b9d75c-6lgvv,UID:02a9ea35-85c7-4d09-8ad7-0e51b7875275,ResourceVersion:10989692,Generation:0,CreationTimestamp:2020-11-22 23:29:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 3df05b06-6012-445d-ad61-d032117d553b 0xc003de3657 0xc003de3658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bzp9n {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bzp9n,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-bzp9n true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003de36d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003de36f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:29:56 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:30:00 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:30:00 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:29:56 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.42,StartTime:2020-11-22 23:29:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-11-22 23:29:59 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://b933f8d53ed4f074e8a575585d9467ff8c4b99c90d961384f658962697e4b92c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:30:00.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-457" for this suite.
Nov 22 23:30:08.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:30:08.525: INFO: namespace deployment-457 deletion completed in 8.103165105s

• [SLOW TEST:17.262 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:30:08.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:30:08.614: INFO: Pod name rollover-pod: Found 0 pods out of 1
Nov 22 23:30:13.618: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Nov 22 23:30:13.618: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Nov 22 23:30:15.622: INFO: Creating deployment "test-rollover-deployment"
Nov 22 23:30:15.633: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Nov 22 23:30:17.639: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Nov 22 23:30:17.646: INFO: Ensure that both replica sets have 1 created replica
Nov 22 23:30:17.652: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Nov 22 23:30:17.658: INFO: Updating deployment test-rollover-deployment
Nov 22 23:30:17.658: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Nov 22 23:30:19.682: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Nov 22 23:30:19.687: INFO: Make sure deployment "test-rollover-deployment" is complete
Nov 22 23:30:19.691: INFO: all replica sets need to contain the pod-template-hash label
Nov 22 23:30:19.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684617, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:30:21.699: INFO: all replica sets need to contain the pod-template-hash label
Nov 22 23:30:21.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684621, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:30:23.699: INFO: all replica sets need to contain the pod-template-hash label
Nov 22 23:30:23.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684621, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:30:25.699: INFO: all replica sets need to contain the pod-template-hash label
Nov 22 23:30:25.700: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684621, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:30:27.699: INFO: all replica sets need to contain the pod-template-hash label
Nov 22 23:30:27.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684621, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:30:29.699: INFO: all replica sets need to contain the pod-template-hash label
Nov 22 23:30:29.699: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684621, loc:(*time.Location)(0x7edea20)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684615, loc:(*time.Location)(0x7edea20)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)}
Nov 22 23:30:31.699: INFO: 
Nov 22 23:30:31.699: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Nov 22 23:30:31.707: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-6094,SelfLink:/apis/apps/v1/namespaces/deployment-6094/deployments/test-rollover-deployment,UID:669263e0-3cdc-49df-ad01-0305fc1343f3,ResourceVersion:10989865,Generation:2,CreationTimestamp:2020-11-22 23:30:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-11-22 23:30:15 +0000 UTC 2020-11-22 23:30:15 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-11-22 23:30:31 +0000 UTC 2020-11-22 23:30:15 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Nov 22 23:30:31.710: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-6094,SelfLink:/apis/apps/v1/namespaces/deployment-6094/replicasets/test-rollover-deployment-854595fc44,UID:742b909c-d6d3-4fac-81bc-2debd005a725,ResourceVersion:10989854,Generation:2,CreationTimestamp:2020-11-22 23:30:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 669263e0-3cdc-49df-ad01-0305fc1343f3 0xc0038a8407 0xc0038a8408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Nov 22 23:30:31.711: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Nov 22 23:30:31.711: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-6094,SelfLink:/apis/apps/v1/namespaces/deployment-6094/replicasets/test-rollover-controller,UID:3518b66d-02b4-49fd-b35c-98bfb9d0a92c,ResourceVersion:10989863,Generation:2,CreationTimestamp:2020-11-22 23:30:08 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 669263e0-3cdc-49df-ad01-0305fc1343f3 0xc0038a831f 0xc0038a8330}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Nov 22 23:30:31.711: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-6094,SelfLink:/apis/apps/v1/namespaces/deployment-6094/replicasets/test-rollover-deployment-9b8b997cf,UID:24bfe624-f91a-46fc-9453-855535d383eb,ResourceVersion:10989817,Generation:2,CreationTimestamp:2020-11-22 23:30:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 669263e0-3cdc-49df-ad01-0305fc1343f3 0xc0038a84d0 0xc0038a84d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Nov 22 23:30:31.715: INFO: Pod "test-rollover-deployment-854595fc44-s2ff4" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-s2ff4,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-6094,SelfLink:/api/v1/namespaces/deployment-6094/pods/test-rollover-deployment-854595fc44-s2ff4,UID:5f5aae8a-30b7-4711-a704-9e0000f2b107,ResourceVersion:10989831,Generation:0,CreationTimestamp:2020-11-22 23:30:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 742b909c-d6d3-4fac-81bc-2debd005a725 0xc003f22dd7 0xc003f22dd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-6fzb9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-6fzb9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-6fzb9 true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc003f22e50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc003f22e70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:30:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:30:21 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:30:21 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:30:17 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.20,StartTime:2020-11-22 23:30:17 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-11-22 23:30:20 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://ba790aa945e74853359e5b24c55fb90c3ba012abda2e82a34a87db70bb9e04c8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:30:31.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6094" for this suite.
Nov 22 23:30:39.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:30:39.811: INFO: namespace deployment-6094 deletion completed in 8.092776522s

• [SLOW TEST:31.286 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
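The rollover test above loops on `Make sure deployment "test-rollover-deployment" is complete`, re-logging `v1.DeploymentStatus` until the new ReplicaSet fully replaces the old one. A minimal sketch of that completeness condition follows; the field names mirror the status dumps in this log, but the helper is illustrative, not the e2e framework's actual code.

```python
# Illustrative sketch (NOT the e2e framework's code) of the condition behind
# "Make sure deployment ... is complete" in the log above. Field names mirror
# the v1.DeploymentStatus dumps printed while the test polls.
from dataclasses import dataclass


@dataclass
class DeploymentStatus:
    observed_generation: int
    replicas: int
    updated_replicas: int
    ready_replicas: int
    available_replicas: int
    unavailable_replicas: int


def deployment_complete(desired_replicas: int, generation: int,
                        status: DeploymentStatus) -> bool:
    """A rollover is done when the controller has observed the latest spec,
    every replica runs the new template, and all of them are available."""
    return (status.observed_generation >= generation
            and status.updated_replicas == desired_replicas
            and status.replicas == desired_replicas
            and status.available_replicas == desired_replicas)


# Mid-rollover snapshot from the log: 2 replicas total, only 1 updated.
mid = DeploymentStatus(2, 2, 1, 2, 1, 1)
print(deployment_complete(1, 2, mid))   # -> False

# Final status from the log: 1 replica, updated and available, none old.
done = DeploymentStatus(2, 1, 1, 1, 1, 0)
print(deployment_complete(1, 2, done))  # -> True
```

This is why the log repeats the same status for several polls: `Replicas:2, UpdatedReplicas:1` fails the check until the old ReplicaSet scales to zero.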
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:30:39.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5959
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Nov 22 23:30:39.954: INFO: Found 0 stateful pods, waiting for 3
Nov 22 23:30:49.958: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:30:49.958: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:30:49.958: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Nov 22 23:30:49.981: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Nov 22 23:31:00.091: INFO: Updating stateful set ss2
Nov 22 23:31:00.103: INFO: Waiting for Pod statefulset-5959/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Nov 22 23:31:10.586: INFO: Found 2 stateful pods, waiting for 3
Nov 22 23:31:20.591: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:31:20.591: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:31:20.591: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Nov 22 23:31:20.614: INFO: Updating stateful set ss2
Nov 22 23:31:20.622: INFO: Waiting for Pod statefulset-5959/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Nov 22 23:31:30.646: INFO: Updating stateful set ss2
Nov 22 23:31:30.685: INFO: Waiting for StatefulSet statefulset-5959/ss2 to complete update
Nov 22 23:31:30.685: INFO: Waiting for Pod statefulset-5959/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Nov 22 23:31:40.692: INFO: Deleting all statefulset in ns statefulset-5959
Nov 22 23:31:40.694: INFO: Scaling statefulset ss2 to 0
Nov 22 23:32:00.713: INFO: Waiting for statefulset status.replicas updated to 0
Nov 22 23:32:00.716: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:32:00.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5959" for this suite.
Nov 22 23:32:06.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:32:06.922: INFO: namespace statefulset-5959 deletion completed in 6.185467341s

• [SLOW TEST:87.110 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
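The StatefulSet canary/phased steps above are driven by the RollingUpdate `partition` field: only pods whose ordinal is at or above the partition move to the update revision, so lowering the partition in stages rolls the update from the highest ordinal down. A small sketch of that ordinal rule, under the revision names from this log (the function itself is illustrative, not controller code):

```python
# Illustrative sketch of StatefulSet RollingUpdate "partition" semantics,
# which drive the canary and phased steps logged above. Not controller code.
def expected_revisions(replicas: int, partition: int,
                       current: str, update: str) -> dict:
    """Map pod ordinal -> revision after the controller converges: pods with
    ordinal >= partition get the update revision, the rest keep current."""
    return {i: (update if i >= partition else current)
            for i in range(replicas)}


cur, upd = "ss2-6c5cd755cd", "ss2-7c9b54fd4c"  # revisions from the log

# "partition greater than the number of replicas": nothing updates.
print(expected_revisions(3, 4, cur, upd))
# -> {0: 'ss2-6c5cd755cd', 1: 'ss2-6c5cd755cd', 2: 'ss2-6c5cd755cd'}

# Canary step: partition=2 updates only the highest ordinal, ss2-2.
print(expected_revisions(3, 2, cur, upd))
# -> {0: 'ss2-6c5cd755cd', 1: 'ss2-6c5cd755cd', 2: 'ss2-7c9b54fd4c'}
```

This matches the log's ordering: ss2-2 converges first, then ss2-1 and ss2-0 as the phased update lowers the partition.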
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:32:06.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Nov 22 23:32:06.993: INFO: Waiting up to 5m0s for pod "pod-f7913e67-2c3b-4275-a843-f55a364bae1e" in namespace "emptydir-9104" to be "success or failure"
Nov 22 23:32:07.008: INFO: Pod "pod-f7913e67-2c3b-4275-a843-f55a364bae1e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.418266ms
Nov 22 23:32:09.034: INFO: Pod "pod-f7913e67-2c3b-4275-a843-f55a364bae1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040644126s
Nov 22 23:32:11.038: INFO: Pod "pod-f7913e67-2c3b-4275-a843-f55a364bae1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044435452s
STEP: Saw pod success
Nov 22 23:32:11.038: INFO: Pod "pod-f7913e67-2c3b-4275-a843-f55a364bae1e" satisfied condition "success or failure"
Nov 22 23:32:11.040: INFO: Trying to get logs from node iruya-worker pod pod-f7913e67-2c3b-4275-a843-f55a364bae1e container test-container: 
STEP: delete the pod
Nov 22 23:32:11.057: INFO: Waiting for pod pod-f7913e67-2c3b-4275-a843-f55a364bae1e to disappear
Nov 22 23:32:11.062: INFO: Pod pod-f7913e67-2c3b-4275-a843-f55a364bae1e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:32:11.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9104" for this suite.
Nov 22 23:32:17.274: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:32:17.356: INFO: namespace emptydir-9104 deletion completed in 6.279439782s

• [SLOW TEST:10.434 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
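The (root,0644,tmpfs) case above writes a file into a tmpfs-backed emptyDir with mode 0644 and has the test container report the permissions back. A minimal local sketch of that permission check, with no cluster involved — a temp file stands in for the emptyDir mount, and the file name is illustrative:

```shell
# Sketch: reproduce the 0644 permission check locally (no tmpfs mount needed).
# The real test mounts an emptyDir with medium: Memory; a temp file stands in here.
tmpdir=$(mktemp -d)
f="$tmpdir/mount-file"     # illustrative name, not the test's actual path
: > "$f"
chmod 0644 "$f"
stat -c '%a' "$f"          # prints the octal mode the test container would report: 644
rm -rf "$tmpdir"
```

The `stat -c '%a'` form assumes GNU coreutils, which is what the Linux-only (`[LinuxOnly]`) test images provide.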
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:32:17.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Nov 22 23:32:17.432: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Nov 22 23:32:17.450: INFO: Waiting for terminating namespaces to be deleted...
Nov 22 23:32:17.453: INFO: 
Logging pods the kubelet thinks are on node iruya-worker before test
Nov 22 23:32:17.457: INFO: kindnet-7bsvw from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container status recorded)
Nov 22 23:32:17.457: INFO: 	Container kindnet-cni ready: true, restart count 0
Nov 22 23:32:17.457: INFO: kube-proxy-mtljr from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container status recorded)
Nov 22 23:32:17.457: INFO: 	Container kube-proxy ready: true, restart count 0
Nov 22 23:32:17.457: INFO: 
Logging pods the kubelet thinks are on node iruya-worker2 before test
Nov 22 23:32:17.464: INFO: kindnet-djqgh from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container status recorded)
Nov 22 23:32:17.464: INFO: 	Container kindnet-cni ready: true, restart count 0
Nov 22 23:32:17.464: INFO: kube-proxy-52wt5 from kube-system started at 2020-09-23 08:26:08 +0000 UTC (1 container status recorded)
Nov 22 23:32:17.464: INFO: 	Container kube-proxy ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Nov 22 23:32:17.590: INFO: Pod kindnet-7bsvw requesting resource cpu=100m on Node iruya-worker
Nov 22 23:32:17.590: INFO: Pod kindnet-djqgh requesting resource cpu=100m on Node iruya-worker2
Nov 22 23:32:17.590: INFO: Pod kube-proxy-52wt5 requesting resource cpu=0m on Node iruya-worker2
Nov 22 23:32:17.590: INFO: Pod kube-proxy-mtljr requesting resource cpu=0m on Node iruya-worker
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6b60900e-1421-4c44-83c5-f72fbef9158c.1649f87847165d6d], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1785/filler-pod-6b60900e-1421-4c44-83c5-f72fbef9158c to iruya-worker]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6b60900e-1421-4c44-83c5-f72fbef9158c.1649f878b201e963], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6b60900e-1421-4c44-83c5-f72fbef9158c.1649f879023e53c0], Reason = [Created], Message = [Created container filler-pod-6b60900e-1421-4c44-83c5-f72fbef9158c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-6b60900e-1421-4c44-83c5-f72fbef9158c.1649f8790fd1f0c0], Reason = [Started], Message = [Started container filler-pod-6b60900e-1421-4c44-83c5-f72fbef9158c]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-eb8fbb7c-119c-453e-929a-e0b25ee24734.1649f878486612bb], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1785/filler-pod-eb8fbb7c-119c-453e-929a-e0b25ee24734 to iruya-worker2]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-eb8fbb7c-119c-453e-929a-e0b25ee24734.1649f878b880ca74], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-eb8fbb7c-119c-453e-929a-e0b25ee24734.1649f87905d6260d], Reason = [Created], Message = [Created container filler-pod-eb8fbb7c-119c-453e-929a-e0b25ee24734]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-eb8fbb7c-119c-453e-929a-e0b25ee24734.1649f87915314bd2], Reason = [Started], Message = [Started container filler-pod-eb8fbb7c-119c-453e-929a-e0b25ee24734]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.1649f879399f98b1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:32:22.779: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-1785" for this suite.
Nov 22 23:32:28.795: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:32:28.867: INFO: namespace sched-pred-1785 deletion completed in 6.083219213s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:11.511 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
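The predicate test above sums the CPU already requested on each node, starts filler pods sized to consume the remainder, and then asserts that one more pod fails with `Insufficient cpu`. The arithmetic can be sketched with assumed numbers — the node's allocatable CPU is not shown in the log, so 2000m is an assumption; only the 100m kindnet request and the 0m kube-proxy request appear above:

```shell
# Sketch of the scheduler's CPU-fit check, in millicores. "allocatable" is an
# assumed value; the log only reports the existing 100m (kindnet) and 0m
# (kube-proxy) requests per node.
allocatable=2000
requested=100                          # kindnet-* request logged above
filler=$((allocatable - requested))    # filler pod sized to consume the remainder
remaining=$((allocatable - requested - filler))
extra=100                              # the "additional-pod" request
if [ "$extra" -gt "$remaining" ]; then
  echo "FailedScheduling: Insufficient cpu"
fi
```

With the remainder consumed by the filler pod, `remaining` is 0, so any nonzero `extra` triggers the same `Insufficient cpu` outcome the Warning event records.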
SSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:32:28.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6010.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6010.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6010.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6010.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6010.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6010.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6010.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6010.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6010.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6010.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 214.245.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.245.214_udp@PTR;check="$$(dig +tcp +noall +answer +search 214.245.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.245.214_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6010.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6010.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6010.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6010.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6010.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6010.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6010.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6010.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6010.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6010.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6010.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 214.245.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.245.214_udp@PTR;check="$$(dig +tcp +noall +answer +search 214.245.105.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.105.245.214_tcp@PTR;sleep 1; done

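The probe scripts above derive the pod's A-record name from its IP (dashes for dots, plus the namespace's pod domain) and query the PTR name by reversing the octets of the service ClusterIP. Both transformations, shown on a fixed example IP — the IP literal here is illustrative; the real script uses `hostname -i`:

```shell
# Derive the pod A-record name (as the probes' awk does) and the in-addr.arpa
# PTR name from an example IP. "dns-6010" matches the test namespace above.
ip="10.105.245.214"
podARec=$(echo "$ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-6010.pod.cluster.local"}')
ptr=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa."}')
echo "$podARec"   # 10-105-245-214.dns-6010.pod.cluster.local
echo "$ptr"       # 214.245.105.10.in-addr.arpa.
```

The reversed form matches the `214.245.105.10.in-addr.arpa.` PTR query visible in the probe commands.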
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 22 23:32:35.150: INFO: Unable to read wheezy_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:35.179: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:35.182: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:35.184: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:35.206: INFO: Unable to read jessie_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:35.209: INFO: Unable to read jessie_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:35.213: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:35.216: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:35.236: INFO: Lookups using dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83 failed for: [wheezy_udp@dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_udp@dns-test-service.dns-6010.svc.cluster.local jessie_tcp@dns-test-service.dns-6010.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local]

Nov 22 23:32:40.241: INFO: Unable to read wheezy_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:40.245: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:40.247: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:40.250: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:40.271: INFO: Unable to read jessie_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:40.274: INFO: Unable to read jessie_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:40.277: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:40.280: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:40.298: INFO: Lookups using dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83 failed for: [wheezy_udp@dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_udp@dns-test-service.dns-6010.svc.cluster.local jessie_tcp@dns-test-service.dns-6010.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local]

Nov 22 23:32:45.241: INFO: Unable to read wheezy_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:45.244: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:45.247: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:45.251: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:45.271: INFO: Unable to read jessie_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:45.273: INFO: Unable to read jessie_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:45.275: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:45.278: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:45.295: INFO: Lookups using dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83 failed for: [wheezy_udp@dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_udp@dns-test-service.dns-6010.svc.cluster.local jessie_tcp@dns-test-service.dns-6010.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local]

Nov 22 23:32:50.241: INFO: Unable to read wheezy_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:50.245: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:50.248: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:50.251: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:50.271: INFO: Unable to read jessie_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:50.273: INFO: Unable to read jessie_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:50.276: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:50.278: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:50.296: INFO: Lookups using dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83 failed for: [wheezy_udp@dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_udp@dns-test-service.dns-6010.svc.cluster.local jessie_tcp@dns-test-service.dns-6010.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local]

Nov 22 23:32:55.241: INFO: Unable to read wheezy_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:55.244: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:55.248: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:55.251: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:55.273: INFO: Unable to read jessie_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:55.276: INFO: Unable to read jessie_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:55.279: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:55.281: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:32:55.301: INFO: Lookups using dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83 failed for: [wheezy_udp@dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_udp@dns-test-service.dns-6010.svc.cluster.local jessie_tcp@dns-test-service.dns-6010.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local]

Nov 22 23:33:00.240: INFO: Unable to read wheezy_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:33:00.243: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:33:00.246: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:33:00.248: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:33:00.266: INFO: Unable to read jessie_udp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:33:00.269: INFO: Unable to read jessie_tcp@dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:33:00.272: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:33:00.274: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local from pod dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83: the server could not find the requested resource (get pods dns-test-57badfda-3c4e-4329-8486-1379d4321a83)
Nov 22 23:33:00.291: INFO: Lookups using dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83 failed for: [wheezy_udp@dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@dns-test-service.dns-6010.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_udp@dns-test-service.dns-6010.svc.cluster.local jessie_tcp@dns-test-service.dns-6010.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6010.svc.cluster.local]

Nov 22 23:33:05.328: INFO: DNS probes using dns-6010/dns-test-57badfda-3c4e-4329-8486-1379d4321a83 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:33:05.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6010" for this suite.
Nov 22 23:33:12.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:33:12.192: INFO: namespace dns-6010 deletion completed in 6.164794732s

• [SLOW TEST:43.326 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:33:12.193: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:33:12.266: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Nov 22 23:33:17.272: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Nov 22 23:33:17.272: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Nov 22 23:33:17.393: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-2752,SelfLink:/apis/apps/v1/namespaces/deployment-2752/deployments/test-cleanup-deployment,UID:700e0648-aa23-4a1f-9020-c6f8cc8d3065,ResourceVersion:10990622,Generation:1,CreationTimestamp:2020-11-22 23:33:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Nov 22 23:33:17.423: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-2752,SelfLink:/apis/apps/v1/namespaces/deployment-2752/replicasets/test-cleanup-deployment-55bbcbc84c,UID:86e05c5c-ce93-4c35-aeff-2b965895d6e3,ResourceVersion:10990624,Generation:1,CreationTimestamp:2020-11-22 23:33:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 700e0648-aa23-4a1f-9020-c6f8cc8d3065 0xc002e062d7 0xc002e062d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Nov 22 23:33:17.423: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Nov 22 23:33:17.423: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-2752,SelfLink:/apis/apps/v1/namespaces/deployment-2752/replicasets/test-cleanup-controller,UID:2ce640ab-2cb9-4355-8703-c1fdf23e1448,ResourceVersion:10990623,Generation:1,CreationTimestamp:2020-11-22 23:33:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 700e0648-aa23-4a1f-9020-c6f8cc8d3065 0xc002e06187 0xc002e06188}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Nov 22 23:33:17.453: INFO: Pod "test-cleanup-controller-dbhw5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-dbhw5,GenerateName:test-cleanup-controller-,Namespace:deployment-2752,SelfLink:/api/v1/namespaces/deployment-2752/pods/test-cleanup-controller-dbhw5,UID:f8fde4cf-3115-426e-89b5-b0dc28fb57cb,ResourceVersion:10990619,Generation:0,CreationTimestamp:2020-11-22 23:33:12 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 2ce640ab-2cb9-4355-8703-c1fdf23e1448 0xc002e07037 0xc002e07038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fzphk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fzphk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-fzphk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e070b0} {node.kubernetes.io/unreachable Exists  NoExecute 
0xc002e070d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:33:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:33:16 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:33:16 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:33:12 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.27,StartTime:2020-11-22 23:33:12 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:33:15 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c6216424050debbb0dc02fe98c60b7beaf79b8dd2b76f19ae4c1754ca952f66e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:33:17.453: INFO: Pod "test-cleanup-deployment-55bbcbc84c-z4h2h" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-z4h2h,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-2752,SelfLink:/api/v1/namespaces/deployment-2752/pods/test-cleanup-deployment-55bbcbc84c-z4h2h,UID:1c8c042e-0ede-480a-9275-c597ce66d10e,ResourceVersion:10990628,Generation:0,CreationTimestamp:2020-11-22 23:33:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 86e05c5c-ce93-4c35-aeff-2b965895d6e3 0xc002e071b7 0xc002e071b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-fzphk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-fzphk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-fzphk true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002e07230} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002e07250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:33:17 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:33:17.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2752" for this suite.
Nov 22 23:33:23.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:33:23.632: INFO: namespace deployment-2752 deletion completed in 6.166299966s

• [SLOW TEST:11.439 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
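Editor's note: the Deployment dump above shows `RevisionHistoryLimit:*0`, which is what makes the controller delete old ReplicaSets once the rollout succeeds. A minimal sketch of the manifest the test implies (this is not the e2e framework's own code; field values are taken from the dump, structure is standard apps/v1):

```python
# Sketch of the Deployment the log dumps above. revisionHistoryLimit=0
# tells the Deployment controller to keep zero old ReplicaSets, so
# "test-cleanup-controller" is garbage-collected after the rollout.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {
        "name": "test-cleanup-deployment",
        "labels": {"name": "cleanup-pod"},
    },
    "spec": {
        "replicas": 1,
        "revisionHistoryLimit": 0,  # keep no old ReplicaSets around
        "selector": {"matchLabels": {"name": "cleanup-pod"}},
        "template": {
            "metadata": {"labels": {"name": "cleanup-pod"}},
            "spec": {
                "terminationGracePeriodSeconds": 0,
                "containers": [{
                    "name": "redis",
                    "image": "gcr.io/kubernetes-e2e-test-images/redis:1.0",
                }],
            },
        },
    },
}
```

With the default `revisionHistoryLimit` of 10 the old ReplicaSet would instead be scaled to 0 and retained for rollback.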
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:33:23.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Nov 22 23:33:23.785: INFO: Waiting up to 5m0s for pod "pod-87825c9c-ebbe-4865-9f4d-3e42b3c0a281" in namespace "emptydir-7617" to be "success or failure"
Nov 22 23:33:23.788: INFO: Pod "pod-87825c9c-ebbe-4865-9f4d-3e42b3c0a281": Phase="Pending", Reason="", readiness=false. Elapsed: 3.039546ms
Nov 22 23:33:25.792: INFO: Pod "pod-87825c9c-ebbe-4865-9f4d-3e42b3c0a281": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006506914s
Nov 22 23:33:27.796: INFO: Pod "pod-87825c9c-ebbe-4865-9f4d-3e42b3c0a281": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010790862s
STEP: Saw pod success
Nov 22 23:33:27.796: INFO: Pod "pod-87825c9c-ebbe-4865-9f4d-3e42b3c0a281" satisfied condition "success or failure"
Nov 22 23:33:27.800: INFO: Trying to get logs from node iruya-worker2 pod pod-87825c9c-ebbe-4865-9f4d-3e42b3c0a281 container test-container: 
STEP: delete the pod
Nov 22 23:33:27.847: INFO: Waiting for pod pod-87825c9c-ebbe-4865-9f4d-3e42b3c0a281 to disappear
Nov 22 23:33:27.854: INFO: Pod pod-87825c9c-ebbe-4865-9f4d-3e42b3c0a281 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:33:27.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7617" for this suite.
Nov 22 23:33:33.955: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:33:34.032: INFO: namespace emptydir-7617 deletion completed in 6.175376334s

• [SLOW TEST:10.401 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
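Editor's note: the emptyDir test above creates a pod that writes a 0644 file on the default medium as a non-root user, then checks the mode from the logs. An illustrative sketch of such a pod (names, the UID, and the busybox image are assumptions, not the e2e framework's actual mounttest manifest):

```python
# Illustrative pod exercising (non-root, 0644, default medium) on emptyDir.
# The real e2e test uses a dedicated mounttest image; busybox stands in here.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-emptydir-0644"},  # name is illustrative
    "spec": {
        "securityContext": {"runAsUser": 1001},  # any non-root UID
        "restartPolicy": "Never",
        "volumes": [
            # omitting "medium" selects the default (node disk) medium
            {"name": "test-volume", "emptyDir": {}},
        ],
        "containers": [{
            "name": "test-container",
            "image": "busybox",
            "command": ["sh", "-c",
                        "echo data > /test/f && chmod 0644 /test/f && ls -l /test/f"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test"}],
        }],
    },
}
```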
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:33:34.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Nov 22 23:33:34.088: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-a,UID:2b9456cd-31d4-4547-8d2b-f939f4728083,ResourceVersion:10990723,Generation:0,CreationTimestamp:2020-11-22 23:33:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 22 23:33:34.088: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-a,UID:2b9456cd-31d4-4547-8d2b-f939f4728083,ResourceVersion:10990723,Generation:0,CreationTimestamp:2020-11-22 23:33:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Nov 22 23:33:44.097: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-a,UID:2b9456cd-31d4-4547-8d2b-f939f4728083,ResourceVersion:10990743,Generation:0,CreationTimestamp:2020-11-22 23:33:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Nov 22 23:33:44.097: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-a,UID:2b9456cd-31d4-4547-8d2b-f939f4728083,ResourceVersion:10990743,Generation:0,CreationTimestamp:2020-11-22 23:33:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Nov 22 23:33:54.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-a,UID:2b9456cd-31d4-4547-8d2b-f939f4728083,ResourceVersion:10990763,Generation:0,CreationTimestamp:2020-11-22 23:33:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 22 23:33:54.106: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-a,UID:2b9456cd-31d4-4547-8d2b-f939f4728083,ResourceVersion:10990763,Generation:0,CreationTimestamp:2020-11-22 23:33:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Nov 22 23:34:04.119: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-a,UID:2b9456cd-31d4-4547-8d2b-f939f4728083,ResourceVersion:10990783,Generation:0,CreationTimestamp:2020-11-22 23:33:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 22 23:34:04.119: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-a,UID:2b9456cd-31d4-4547-8d2b-f939f4728083,ResourceVersion:10990783,Generation:0,CreationTimestamp:2020-11-22 23:33:34 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Nov 22 23:34:14.126: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-b,UID:ba6c4c4c-06b7-462b-b21e-73f2d5bec622,ResourceVersion:10990804,Generation:0,CreationTimestamp:2020-11-22 23:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 22 23:34:14.126: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-b,UID:ba6c4c4c-06b7-462b-b21e-73f2d5bec622,ResourceVersion:10990804,Generation:0,CreationTimestamp:2020-11-22 23:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Nov 22 23:34:24.132: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-b,UID:ba6c4c4c-06b7-462b-b21e-73f2d5bec622,ResourceVersion:10990824,Generation:0,CreationTimestamp:2020-11-22 23:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 22 23:34:24.132: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-5291,SelfLink:/api/v1/namespaces/watch-5291/configmaps/e2e-watch-test-configmap-b,UID:ba6c4c4c-06b7-462b-b21e-73f2d5bec622,ResourceVersion:10990824,Generation:0,CreationTimestamp:2020-11-22 23:34:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:34:34.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5291" for this suite.
Nov 22 23:34:40.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:34:40.255: INFO: namespace watch-5291 deletion completed in 6.117365712s

• [SLOW TEST:66.222 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
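Editor's note: the watch test above shows why every event appears twice in the log: one copy goes to the single-label watcher (A or B) and one to the combined A-or-B watcher, while the other single-label watcher sees nothing. A small sketch of that routing logic (pure simulation, not client-go):

```python
# Simulate which label-selector watches receive an event for an object
# with the given labels. Equality selectors only, as in the test.
def watchers_for(labels, watches):
    """Return names of watches whose selector matches all of `labels`."""
    return [name for name, selector in watches.items()
            if all(labels.get(k) == v for k, v in selector.items())]

watches = {
    "watch-A": {"watch-this-configmap": "multiple-watchers-A"},
    "watch-B": {"watch-this-configmap": "multiple-watchers-B"},
}

# The A-or-B watch is set-based ("in (A, B)"), so it is modeled separately:
def watch_a_or_b(labels):
    return labels.get("watch-this-configmap") in (
        "multiple-watchers-A", "multiple-watchers-B")

labels_a = {"watch-this-configmap": "multiple-watchers-A"}
# watchers_for(labels_a, watches) -> ["watch-A"], and watch_a_or_b(labels_a)
# is True, so configmap-a events are delivered to exactly two watchers.
```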
S
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:34:40.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-7e9db6b2-1508-4da2-9c36-5a91d5f3945e
STEP: Creating a pod to test consume configMaps
Nov 22 23:34:40.343: INFO: Waiting up to 5m0s for pod "pod-configmaps-f0495eaf-b978-4da7-8a62-1cdcf77581f5" in namespace "configmap-1023" to be "success or failure"
Nov 22 23:34:40.347: INFO: Pod "pod-configmaps-f0495eaf-b978-4da7-8a62-1cdcf77581f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.780819ms
Nov 22 23:34:42.351: INFO: Pod "pod-configmaps-f0495eaf-b978-4da7-8a62-1cdcf77581f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008347511s
Nov 22 23:34:44.355: INFO: Pod "pod-configmaps-f0495eaf-b978-4da7-8a62-1cdcf77581f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012394576s
STEP: Saw pod success
Nov 22 23:34:44.355: INFO: Pod "pod-configmaps-f0495eaf-b978-4da7-8a62-1cdcf77581f5" satisfied condition "success or failure"
Nov 22 23:34:44.359: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-f0495eaf-b978-4da7-8a62-1cdcf77581f5 container configmap-volume-test: 
STEP: delete the pod
Nov 22 23:34:44.380: INFO: Waiting for pod pod-configmaps-f0495eaf-b978-4da7-8a62-1cdcf77581f5 to disappear
Nov 22 23:34:44.383: INFO: Pod pod-configmaps-f0495eaf-b978-4da7-8a62-1cdcf77581f5 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:34:44.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1023" for this suite.
Nov 22 23:34:50.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:34:50.476: INFO: namespace configmap-1023 deletion completed in 6.089853637s

• [SLOW TEST:10.221 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
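The spec above mounts a ConfigMap as a volume using an `items` mapping (a key projected to a different path) while the container runs as a non-root user. A minimal manifest sketching the pattern being exercised — names, the uid, the image, and the paths are illustrative, not the UUID-suffixed ones the suite generates:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-map    # suite uses a UUID-suffixed name
data:
  data-2: value-2
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  securityContext:
    runAsUser: 1000                  # non-root, per the [LinuxOnly] variant
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map
      items:
      - key: data-2
        path: path/to/data-2         # the "mapping": key remapped to a new path
```

The pod runs to completion (`Succeeded`), which is why the suite waits on the "success or failure" condition rather than readiness.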
SSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:34:50.476: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-92c99016-adad-4e5c-9383-e6f66af1670f
STEP: Creating a pod to test consume secrets
Nov 22 23:34:50.600: INFO: Waiting up to 5m0s for pod "pod-secrets-b65f9cbf-8757-43ff-bad6-d916feaae790" in namespace "secrets-8003" to be "success or failure"
Nov 22 23:34:50.617: INFO: Pod "pod-secrets-b65f9cbf-8757-43ff-bad6-d916feaae790": Phase="Pending", Reason="", readiness=false. Elapsed: 17.273121ms
Nov 22 23:34:52.621: INFO: Pod "pod-secrets-b65f9cbf-8757-43ff-bad6-d916feaae790": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020641817s
Nov 22 23:34:54.624: INFO: Pod "pod-secrets-b65f9cbf-8757-43ff-bad6-d916feaae790": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024035017s
STEP: Saw pod success
Nov 22 23:34:54.624: INFO: Pod "pod-secrets-b65f9cbf-8757-43ff-bad6-d916feaae790" satisfied condition "success or failure"
Nov 22 23:34:54.627: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b65f9cbf-8757-43ff-bad6-d916feaae790 container secret-volume-test: 
STEP: delete the pod
Nov 22 23:34:54.654: INFO: Waiting for pod pod-secrets-b65f9cbf-8757-43ff-bad6-d916feaae790 to disappear
Nov 22 23:34:54.671: INFO: Pod pod-secrets-b65f9cbf-8757-43ff-bad6-d916feaae790 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:34:54.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8003" for this suite.
Nov 22 23:35:00.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:35:00.766: INFO: namespace secrets-8003 deletion completed in 6.091902249s

• [SLOW TEST:10.289 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
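This spec is the Secret counterpart of the previous one: a key is projected to a mapped path and given an explicit per-item file mode. A sketch of the shape under test, with illustrative names and an assumed mode value:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map        # suite uses a UUID-suffixed name
data:
  data-1: dmFsdWUtMQ==         # base64 of "value-1"
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["ls", "-l", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1
        mode: 0400             # the per-item "Item Mode" the spec name refers to
```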
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:35:00.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Nov 22 23:35:00.840: INFO: Waiting up to 5m0s for pod "client-containers-ebeb8818-7dbf-4f69-9008-afa2098116f3" in namespace "containers-2259" to be "success or failure"
Nov 22 23:35:00.859: INFO: Pod "client-containers-ebeb8818-7dbf-4f69-9008-afa2098116f3": Phase="Pending", Reason="", readiness=false. Elapsed: 19.195512ms
Nov 22 23:35:02.862: INFO: Pod "client-containers-ebeb8818-7dbf-4f69-9008-afa2098116f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022503767s
Nov 22 23:35:04.866: INFO: Pod "client-containers-ebeb8818-7dbf-4f69-9008-afa2098116f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026696299s
STEP: Saw pod success
Nov 22 23:35:04.866: INFO: Pod "client-containers-ebeb8818-7dbf-4f69-9008-afa2098116f3" satisfied condition "success or failure"
Nov 22 23:35:04.869: INFO: Trying to get logs from node iruya-worker2 pod client-containers-ebeb8818-7dbf-4f69-9008-afa2098116f3 container test-container: 
STEP: delete the pod
Nov 22 23:35:04.923: INFO: Waiting for pod client-containers-ebeb8818-7dbf-4f69-9008-afa2098116f3 to disappear
Nov 22 23:35:05.030: INFO: Pod client-containers-ebeb8818-7dbf-4f69-9008-afa2098116f3 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:35:05.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2259" for this suite.
Nov 22 23:35:11.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:35:11.134: INFO: namespace containers-2259 deletion completed in 6.100353511s

• [SLOW TEST:10.367 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
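"Override the image's default command" means setting `spec.containers[].command`, which replaces the image's `ENTRYPOINT` (while `args` would replace `CMD`). A minimal illustrative pod for that behavior:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    # `command` overrides the image ENTRYPOINT entirely; the suite then
    # verifies the container's output to confirm the override took effect.
    command: ["/bin/echo", "override", "entrypoint"]
```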
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:35:11.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:35:16.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5713" for this suite.
Nov 22 23:35:50.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:35:50.347: INFO: namespace replication-controller-5713 deletion completed in 34.094960565s

• [SLOW TEST:39.213 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
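The three STEP lines above describe the adoption flow: an orphan pod carrying a `name` label exists first, then a ReplicationController whose selector matches it is created, and the RC takes ownership of the pod instead of creating a new replica. A sketch of the two objects involved (image and names are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption      # matches the pre-existing pod, so the RC adopts it
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
```

Adoption is visible as an `ownerReference` to the RC appearing on the existing pod, with no second pod being scheduled.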
SSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:35:50.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:35:56.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9050" for this suite.
Nov 22 23:36:04.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:36:04.687: INFO: namespace emptydir-wrapper-9050 deletion completed in 8.095645032s

• [SLOW TEST:14.341 seconds]
[sig-storage] EmptyDir wrapper volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
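The "wrapper" in this spec's name refers to the emptyDir that the kubelet uses internally to back atomically-written volumes such as Secrets and ConfigMaps; the test mounts both kinds in one pod and checks their wrappers do not conflict. A rough sketch of such a pod, with assumed names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  containers:
  - name: secret-test
    image: k8s.gcr.io/pause:3.1
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
    - name: configmap-volume
      mountPath: /etc/configmap-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: wrapped-volume-secret     # assumed name
  - name: configmap-volume
    configMap:
      name: wrapped-volume-configmap        # assumed name
```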
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:36:04.688: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Nov 22 23:36:04.781: INFO: PodSpec: initContainers in spec.initContainers
Nov 22 23:36:49.835: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d8d94a7f-cc37-4a1c-b67d-f279bd0c67ae", GenerateName:"", Namespace:"init-container-6202", SelfLink:"/api/v1/namespaces/init-container-6202/pods/pod-init-d8d94a7f-cc37-4a1c-b67d-f279bd0c67ae", UID:"92e6facc-251c-4a5e-8329-247356068028", ResourceVersion:"10991273", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63741684964, loc:(*time.Location)(0x7edea20)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"781173397"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-rjlf5", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e38cc0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rjlf5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rjlf5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-rjlf5", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc001adf988), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc003bd5c20), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001adfbd0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc001adfc50)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc001adfc58), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc001adfc5c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684964, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684964, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684964, loc:(*time.Location)(0x7edea20)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63741684964, loc:(*time.Location)(0x7edea20)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.18.0.6", PodIP:"10.244.1.30", StartTime:(*v1.Time)(0xc002b72900), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc002b72980), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000ad2d20)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d1a0d4accf2b1e0ab9d286aa150876f3279cf6ce6b14e4daaeca8a9eca74f00a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b729a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002b72940), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:36:49.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6202" for this suite.
Nov 22 23:37:12.082: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:37:12.155: INFO: namespace init-container-6202 deletion completed in 22.275074291s

• [SLOW TEST:67.467 seconds]
[k8s.io] InitContainer [NodeConformance]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
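The pod under test can be read directly out of the `v1.Pod` dump above: two init containers (`init1` running `/bin/false`, `init2` running `/bin/true`) in front of an app container `run1`, with `restartPolicy: Always`. Because `init1` always fails, the kubelet keeps restarting it (the dump shows `RestartCount:3`) and neither `init2` nor `run1` ever starts. Reconstructed as a manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]       # fails every attempt; blocks init2 and run1
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]        # never reached
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
    resources:                    # limits == requests, hence QOSClass "Guaranteed"
      limits:
        cpu: 100m
        memory: "52428800"
```

The pod stays `Pending` with condition `Initialized=False` / reason `ContainersNotInitialized`, which is exactly what the status section of the dump records.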
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:37:12.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Nov 22 23:37:12.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-86'
Nov 22 23:37:14.799: INFO: stderr: ""
Nov 22 23:37:14.799: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Nov 22 23:37:14.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-86'
Nov 22 23:37:25.371: INFO: stderr: ""
Nov 22 23:37:25.371: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:37:25.371: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-86" for this suite.
Nov 22 23:37:31.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:37:31.500: INFO: namespace kubectl-86 deletion completed in 6.116742628s

• [SLOW TEST:19.344 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:37:31.500: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Nov 22 23:37:39.608: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 22 23:37:39.691: INFO: Pod pod-with-poststart-http-hook still exists
Nov 22 23:37:41.691: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 22 23:37:41.695: INFO: Pod pod-with-poststart-http-hook still exists
Nov 22 23:37:43.691: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 22 23:37:43.694: INFO: Pod pod-with-poststart-http-hook still exists
Nov 22 23:37:45.691: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Nov 22 23:37:45.695: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:37:45.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3701" for this suite.
Nov 22 23:38:09.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:38:09.808: INFO: namespace container-lifecycle-hook-3701 deletion completed in 24.108803015s

• [SLOW TEST:38.308 seconds]
[k8s.io] Container Lifecycle Hook
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
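A postStart `httpGet` hook fires an HTTP request immediately after the container starts; the suite first creates a handler pod to receive it, then the hooked pod. A sketch of the hooked pod (the host/port target the real suite resolves from the handler pod's IP; the values here are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart
          port: 8080
          host: 10.244.1.99     # placeholder; suite uses the handler pod's IP
```

The test passes when the handler pod logs the hook request, after which the hooked pod is deleted — the repeated "Waiting for pod ... to disappear" lines above are that teardown poll.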
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:38:09.808: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Nov 22 23:38:09.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Nov 22 23:38:10.020: INFO: stderr: ""
Nov 22 23:38:10.020: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:38:10.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8156" for this suite.
Nov 22 23:38:16.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:38:16.171: INFO: namespace kubectl-8156 deletion completed in 6.146787822s

• [SLOW TEST:6.363 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
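The check above boils down to membership of the bare `v1` group in kubectl's line-oriented output. A minimal sketch of that check (the sample is a small subset of the stdout captured in the run above):

```python
def has_core_v1(api_versions_stdout: str) -> bool:
    """Return True if the core API group (the bare string "v1") appears in
    `kubectl api-versions` output, which lists one group/version per line."""
    return "v1" in api_versions_stdout.splitlines()

# Subset of the stdout logged above.
sample = "apps/v1\nbatch/v1\nnetworking.k8s.io/v1\nstorage.k8s.io/v1\nv1\n"
print(has_core_v1(sample))  # True
```

Splitting on lines (rather than substring search) matters: plain `"v1" in stdout` would also match `apps/v1` and give a false positive.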
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:38:16.172: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:38:48.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1054" for this suite.
Nov 22 23:38:54.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:38:54.969: INFO: namespace container-runtime-1054 deletion completed in 6.081944226s

• [SLOW TEST:38.797 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
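Each "should get the expected X" step above is a poll of container status until it matches an expectation, bounded by a timeout. A hedged sketch of that wait pattern (the getter, timings, and phase strings here are stand-ins, not the e2e framework's actual API):

```python
import time

def wait_for(predicate, get_status, timeout_s=300.0, poll_s=2.0, sleep=time.sleep):
    """Poll get_status() until predicate(status) is true or the timeout
    elapses. Returns the matching status, or raises TimeoutError."""
    deadline = time.monotonic() + timeout_s
    while True:
        status = get_status()
        if predicate(status):
            return status
        if time.monotonic() >= deadline:
            raise TimeoutError(f"status never matched; last seen: {status!r}")
        sleep(poll_s)

# Simulated container that reaches "Terminated" on the third poll.
phases = iter(["Waiting", "Running", "Terminated"])
result = wait_for(lambda s: s == "Terminated", lambda: next(phases),
                  sleep=lambda _: None)
print(result)  # Terminated
```

The same shape underlies the repeated "Waiting up to 5m0s for pod ... to be ..." lines throughout this log.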
SSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:38:54.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-76462bc1-55e5-499f-ab69-36a93fe8e909
STEP: Creating a pod to test consume secrets
Nov 22 23:38:55.048: INFO: Waiting up to 5m0s for pod "pod-secrets-9aeda0c6-a7fe-472a-99f8-c08db8d910b8" in namespace "secrets-6587" to be "success or failure"
Nov 22 23:38:55.118: INFO: Pod "pod-secrets-9aeda0c6-a7fe-472a-99f8-c08db8d910b8": Phase="Pending", Reason="", readiness=false. Elapsed: 69.234865ms
Nov 22 23:38:57.122: INFO: Pod "pod-secrets-9aeda0c6-a7fe-472a-99f8-c08db8d910b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073447526s
Nov 22 23:38:59.157: INFO: Pod "pod-secrets-9aeda0c6-a7fe-472a-99f8-c08db8d910b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.109176546s
STEP: Saw pod success
Nov 22 23:38:59.158: INFO: Pod "pod-secrets-9aeda0c6-a7fe-472a-99f8-c08db8d910b8" satisfied condition "success or failure"
Nov 22 23:38:59.160: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-9aeda0c6-a7fe-472a-99f8-c08db8d910b8 container secret-volume-test: 
STEP: delete the pod
Nov 22 23:38:59.204: INFO: Waiting for pod pod-secrets-9aeda0c6-a7fe-472a-99f8-c08db8d910b8 to disappear
Nov 22 23:38:59.214: INFO: Pod pod-secrets-9aeda0c6-a7fe-472a-99f8-c08db8d910b8 no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:38:59.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6587" for this suite.
Nov 22 23:39:05.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:39:05.300: INFO: namespace secrets-6587 deletion completed in 6.082663245s

• [SLOW TEST:10.330 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
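The "with mappings" variant above projects a secret key to a custom file path via the volume's `items` list. A minimal manifest sketch; the names, image, key, and paths are illustrative, not the generated ones from the run:

```python
# Illustrative pod spec: mount one secret key under a remapped file name.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",
            "command": ["cat", "/etc/secret-volume/new-path-data-1"],
            "volumeMounts": [{"name": "secret-volume",
                              "mountPath": "/etc/secret-volume"}],
        }],
        "volumes": [{
            "name": "secret-volume",
            "secret": {
                "secretName": "secret-test-map-example",
                # The "mapping": project key "data-1" to a different path.
                "items": [{"key": "data-1", "path": "new-path-data-1"}],
            },
        }],
    },
}
print(pod["spec"]["volumes"][0]["secret"]["items"][0]["path"])  # new-path-data-1
```

The test then reads the remapped file from the container and compares it to the secret's value, which is why the pod only needs to run `cat` and exit.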
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:39:05.300: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-2e854d1e-5c42-44c6-a94d-f58dffeb6ced
STEP: Creating a pod to test consume configMaps
Nov 22 23:39:05.360: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-70aadc88-6487-4321-a822-ad00ccb446fd" in namespace "projected-2643" to be "success or failure"
Nov 22 23:39:05.370: INFO: Pod "pod-projected-configmaps-70aadc88-6487-4321-a822-ad00ccb446fd": Phase="Pending", Reason="", readiness=false. Elapsed: 10.446018ms
Nov 22 23:39:07.376: INFO: Pod "pod-projected-configmaps-70aadc88-6487-4321-a822-ad00ccb446fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016338573s
Nov 22 23:39:09.381: INFO: Pod "pod-projected-configmaps-70aadc88-6487-4321-a822-ad00ccb446fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020532368s
STEP: Saw pod success
Nov 22 23:39:09.381: INFO: Pod "pod-projected-configmaps-70aadc88-6487-4321-a822-ad00ccb446fd" satisfied condition "success or failure"
Nov 22 23:39:09.383: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-70aadc88-6487-4321-a822-ad00ccb446fd container projected-configmap-volume-test: 
STEP: delete the pod
Nov 22 23:39:09.431: INFO: Waiting for pod pod-projected-configmaps-70aadc88-6487-4321-a822-ad00ccb446fd to disappear
Nov 22 23:39:09.452: INFO: Pod pod-projected-configmaps-70aadc88-6487-4321-a822-ad00ccb446fd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:39:09.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2643" for this suite.
Nov 22 23:39:15.468: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:39:15.545: INFO: namespace projected-2643 deletion completed in 6.089104771s

• [SLOW TEST:10.245 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
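The `defaultMode` variant above sets the file permission bits applied to every key projected into the volume. A sketch of the relevant volume fragment (names and the particular mode are illustrative; in JSON manifests the API expects the mode as a decimal integer, conventionally written in octal):

```python
volume = {
    "name": "projected-configmap-volume",
    "projected": {
        "defaultMode": 0o400,  # owner read-only; applied to each projected file
        "sources": [{"configMap": {"name": "projected-configmap-example"}}],
    },
}
print(oct(volume["projected"]["defaultMode"]))  # 0o400
```

The `[LinuxOnly]` tag on the test reflects that these POSIX mode bits are only meaningful on Linux nodes.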
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:39:15.546: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Nov 22 23:39:20.164: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4224 pod-service-account-5ef4a3e5-93f5-4a83-8499-88c4e101df9a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Nov 22 23:39:20.338: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4224 pod-service-account-5ef4a3e5-93f5-4a83-8499-88c4e101df9a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Nov 22 23:39:20.570: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4224 pod-service-account-5ef4a3e5-93f5-4a83-8499-88c4e101df9a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:39:20.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4224" for this suite.
Nov 22 23:39:26.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:39:26.869: INFO: namespace svcaccounts-4224 deletion completed in 6.093151353s

• [SLOW TEST:11.323 seconds]
[sig-auth] ServiceAccounts
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
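The three `kubectl exec ... cat` invocations above target the standard in-pod service-account mount. A sketch of the well-known paths being verified:

```python
SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

# The auto-mounted service-account volume exposes these files, matching
# the three reads in the log above: the bearer token, the cluster CA
# certificate, and the pod's namespace.
expected_files = ["token", "ca.crt", "namespace"]
paths = [f"{SA_DIR}/{name}" for name in expected_files]
for p in paths:
    print(p)
```

In-cluster clients use exactly these files to authenticate to the API server, which is what the test's token mount guarantees.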
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:39:26.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-93ce40ba-b61a-4c69-8a32-538e8286e912
STEP: Creating a pod to test consume configMaps
Nov 22 23:39:26.928: INFO: Waiting up to 5m0s for pod "pod-configmaps-5b7899d6-df70-41c9-a7c1-7c19c7833c30" in namespace "configmap-5274" to be "success or failure"
Nov 22 23:39:26.984: INFO: Pod "pod-configmaps-5b7899d6-df70-41c9-a7c1-7c19c7833c30": Phase="Pending", Reason="", readiness=false. Elapsed: 56.077415ms
Nov 22 23:39:29.020: INFO: Pod "pod-configmaps-5b7899d6-df70-41c9-a7c1-7c19c7833c30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092135129s
Nov 22 23:39:31.024: INFO: Pod "pod-configmaps-5b7899d6-df70-41c9-a7c1-7c19c7833c30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096030284s
STEP: Saw pod success
Nov 22 23:39:31.024: INFO: Pod "pod-configmaps-5b7899d6-df70-41c9-a7c1-7c19c7833c30" satisfied condition "success or failure"
Nov 22 23:39:31.027: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-5b7899d6-df70-41c9-a7c1-7c19c7833c30 container configmap-volume-test: 
STEP: delete the pod
Nov 22 23:39:31.049: INFO: Waiting for pod pod-configmaps-5b7899d6-df70-41c9-a7c1-7c19c7833c30 to disappear
Nov 22 23:39:31.053: INFO: Pod pod-configmaps-5b7899d6-df70-41c9-a7c1-7c19c7833c30 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:39:31.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5274" for this suite.
Nov 22 23:39:37.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:39:37.157: INFO: namespace configmap-5274 deletion completed in 6.10025944s

• [SLOW TEST:10.288 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:39:37.158: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 22 23:39:41.465: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:39:41.719: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1235" for this suite.
Nov 22 23:39:47.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:39:48.017: INFO: namespace container-runtime-1235 deletion completed in 6.29273262s

• [SLOW TEST:10.859 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
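With `FallbackToLogsOnError`, exercised above, the kubelet fills the container's termination message from the tail of its log when the container fails without writing to the termination-message file; the log's "Expected: &{DONE}" line shows the probe output being matched. A container-spec sketch (image and command illustrative):

```python
container = {
    "name": "termination-message-container",
    "image": "busybox",
    # Exit non-zero with output only on stdout: under this policy the
    # kubelet falls back to the log tail ("DONE") for the message.
    "command": ["/bin/sh", "-c", "echo DONE; exit 1"],
    "terminationMessagePath": "/dev/termination-log",
    "terminationMessagePolicy": "FallbackToLogsOnError",
}
print(container["terminationMessagePolicy"])  # FallbackToLogsOnError
```

The default policy, `File`, would have left the message empty here because nothing was written to `/dev/termination-log`.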
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:39:48.017: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Nov 22 23:39:48.121: INFO: Waiting up to 5m0s for pod "downward-api-4a7d2f2c-32b3-4820-9c8f-5baae23299cf" in namespace "downward-api-1984" to be "success or failure"
Nov 22 23:39:48.142: INFO: Pod "downward-api-4a7d2f2c-32b3-4820-9c8f-5baae23299cf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.663403ms
Nov 22 23:39:50.344: INFO: Pod "downward-api-4a7d2f2c-32b3-4820-9c8f-5baae23299cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.223006837s
Nov 22 23:39:52.349: INFO: Pod "downward-api-4a7d2f2c-32b3-4820-9c8f-5baae23299cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.227610773s
STEP: Saw pod success
Nov 22 23:39:52.349: INFO: Pod "downward-api-4a7d2f2c-32b3-4820-9c8f-5baae23299cf" satisfied condition "success or failure"
Nov 22 23:39:52.352: INFO: Trying to get logs from node iruya-worker2 pod downward-api-4a7d2f2c-32b3-4820-9c8f-5baae23299cf container dapi-container: 
STEP: delete the pod
Nov 22 23:39:52.580: INFO: Waiting for pod downward-api-4a7d2f2c-32b3-4820-9c8f-5baae23299cf to disappear
Nov 22 23:39:52.627: INFO: Pod downward-api-4a7d2f2c-32b3-4820-9c8f-5baae23299cf no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:39:52.627: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1984" for this suite.
Nov 22 23:39:58.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:39:58.782: INFO: namespace downward-api-1984 deletion completed in 6.150753536s

• [SLOW TEST:10.765 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
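The downward-API env vars checked above come from `fieldRef` selectors on the pod object itself. A sketch of the env fragment (the variable names are illustrative; the field paths are the standard downward-API ones):

```python
env = [
    {"name": "POD_NAME",
     "valueFrom": {"fieldRef": {"fieldPath": "metadata.name"}}},
    {"name": "POD_NAMESPACE",
     "valueFrom": {"fieldRef": {"fieldPath": "metadata.namespace"}}},
    {"name": "POD_IP",
     "valueFrom": {"fieldRef": {"fieldPath": "status.podIP"}}},
]
for var in env:
    print(var["name"], "<-", var["valueFrom"]["fieldRef"]["fieldPath"])
```

The test's container simply echoes these variables so the framework can compare them against the pod's actual name, namespace, and IP.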
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:39:58.782: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5172.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5172.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Nov 22 23:40:04.928: INFO: DNS probes using dns-5172/dns-test-89b7a313-fe06-4c6c-a2dd-07a67e4c5c3b succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:40:04.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5172" for this suite.
Nov 22 23:40:11.070: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:40:11.145: INFO: namespace dns-5172 deletion completed in 6.149202687s

• [SLOW TEST:12.362 seconds]
[sig-network] DNS
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
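The awk pipeline inside the probe commands above derives the pod's DNS A-record name by replacing the dots in its IPv4 address with dashes and appending `<namespace>.pod.cluster.local`. The same transform in Python (the sample IP is illustrative; the namespace matches the run above):

```python
def pod_a_record(pod_ip: str, namespace: str) -> str:
    """Mirror the awk transform from the probe script: e.g. 10.244.1.7 in
    namespace dns-5172 becomes 10-244-1-7.dns-5172.pod.cluster.local."""
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.cluster.local"

print(pod_a_record("10.244.1.7", "dns-5172"))
```

Each probe then runs `dig` against that name over both UDP and TCP and writes an `OK` marker file on success, which is what the "looking for the results" step collects.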
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:40:11.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Nov 22 23:40:11.239: INFO: Waiting up to 5m0s for pod "var-expansion-6d646101-66aa-4474-a948-1488d58bbd7f" in namespace "var-expansion-7758" to be "success or failure"
Nov 22 23:40:11.246: INFO: Pod "var-expansion-6d646101-66aa-4474-a948-1488d58bbd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.31024ms
Nov 22 23:40:13.250: INFO: Pod "var-expansion-6d646101-66aa-4474-a948-1488d58bbd7f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011011522s
Nov 22 23:40:15.253: INFO: Pod "var-expansion-6d646101-66aa-4474-a948-1488d58bbd7f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014447352s
STEP: Saw pod success
Nov 22 23:40:15.253: INFO: Pod "var-expansion-6d646101-66aa-4474-a948-1488d58bbd7f" satisfied condition "success or failure"
Nov 22 23:40:15.255: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-6d646101-66aa-4474-a948-1488d58bbd7f container dapi-container: 
STEP: delete the pod
Nov 22 23:40:15.278: INFO: Waiting for pod var-expansion-6d646101-66aa-4474-a948-1488d58bbd7f to disappear
Nov 22 23:40:15.288: INFO: Pod var-expansion-6d646101-66aa-4474-a948-1488d58bbd7f no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:40:15.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7758" for this suite.
Nov 22 23:40:21.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:40:21.434: INFO: namespace var-expansion-7758 deletion completed in 6.142956759s

• [SLOW TEST:10.289 seconds]
[k8s.io] Variable Expansion
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
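Env composition, as tested above, expands `$(VAR)` references to variables defined earlier in the same container spec. A sketch of the fragment together with a naive version of the substitution it implies (names and values illustrative; this is not the kubelet's actual expansion code, which also handles escaping via `$$`):

```python
env = [
    {"name": "FOO", "value": "foo-value"},
    # $(FOO) is expanded because FOO is defined earlier in the list.
    {"name": "COMPOSED", "value": "prefix-$(FOO)-suffix"},
]

def expand(value: str, defined: dict) -> str:
    """Naive sketch of $(VAR) expansion for already-defined variables."""
    for name, val in defined.items():
        value = value.replace(f"$({name})", val)
    return value

defined = {e["name"]: e["value"] for e in env[:1]}
print(expand(env[1]["value"], defined))  # prefix-foo-value-suffix
```

References to undefined variables are left verbatim by Kubernetes, which this naive sketch also happens to reproduce.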
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:40:21.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-2cffb69b-1f8c-4981-aa24-e275a8df263f
STEP: Creating a pod to test consume secrets
Nov 22 23:40:21.514: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-dbfc745a-d884-499a-bd71-95fdb1663b7b" in namespace "projected-6007" to be "success or failure"
Nov 22 23:40:21.540: INFO: Pod "pod-projected-secrets-dbfc745a-d884-499a-bd71-95fdb1663b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.889001ms
Nov 22 23:40:23.547: INFO: Pod "pod-projected-secrets-dbfc745a-d884-499a-bd71-95fdb1663b7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033142404s
Nov 22 23:40:25.551: INFO: Pod "pod-projected-secrets-dbfc745a-d884-499a-bd71-95fdb1663b7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036670052s
STEP: Saw pod success
Nov 22 23:40:25.551: INFO: Pod "pod-projected-secrets-dbfc745a-d884-499a-bd71-95fdb1663b7b" satisfied condition "success or failure"
Nov 22 23:40:25.554: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-dbfc745a-d884-499a-bd71-95fdb1663b7b container secret-volume-test: 
STEP: delete the pod
Nov 22 23:40:25.575: INFO: Waiting for pod pod-projected-secrets-dbfc745a-d884-499a-bd71-95fdb1663b7b to disappear
Nov 22 23:40:25.592: INFO: Pod pod-projected-secrets-dbfc745a-d884-499a-bd71-95fdb1663b7b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:40:25.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6007" for this suite.
Nov 22 23:40:31.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:40:31.675: INFO: namespace projected-6007 deletion completed in 6.0803278s

• [SLOW TEST:10.240 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
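Annotation: the `Waiting up to 5m0s for pod ... to be "success or failure"` lines above are the e2e framework's phase-poll loop: it re-reads the pod until the phase reaches `Succeeded` or `Failed`. A self-contained sketch of that loop, with the API GET and the sleep injected as callables so it can run without a cluster (parameter names are assumptions, not the framework's API):

```python
import time

def wait_for_pod(get_phase, timeout_s=300, poll_s=2.0, sleep=time.sleep):
    """Poll a pod's phase until Succeeded or Failed (the "success or
    failure" condition logged above) or until the timeout expires.
    `get_phase` stands in for an API GET on the pod."""
    deadline = time.monotonic() + timeout_s
    while True:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        if time.monotonic() >= deadline:
            raise TimeoutError(f"pod still {phase} after {timeout_s}s")
        sleep(poll_s)
```

The log's `Elapsed:` entries at roughly 2 s intervals correspond to successive iterations of such a loop.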
SSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:40:31.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:40:31.755: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Nov 22 23:40:33.793: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:40:34.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8906" for this suite.
Nov 22 23:40:42.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:40:42.982: INFO: namespace replication-controller-8906 deletion completed in 8.131127678s

• [SLOW TEST:11.306 seconds]
[sig-apps] ReplicationController
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
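Annotation: the ReplicationController test above creates a quota of two pods, asks the rc for more, and checks that a failure condition is surfaced, then scales down and checks the condition clears. A toy model of that reconcile-under-quota behavior (the condition type `ReplicaFailure` and reason `FailedCreate` match what the rc reports; the loop itself is a deliberate oversimplification of the real controller):

```python
def reconcile(desired, quota_hard, running=0):
    """Create pods up to the namespace quota; surface a failure condition
    if the desired replica count cannot be satisfied."""
    while running < desired and running < quota_hard:
        running += 1
    conditions = []
    if running < desired:
        conditions.append({"type": "ReplicaFailure", "status": "True",
                           "reason": "FailedCreate"})
    return running, conditions
```

Scaling `desired` down to the quota, as the test does, is what removes the condition on the next reconcile.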
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:40:42.982: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-6f2e8571-fd10-4ff2-a283-77218436ebeb
STEP: Creating secret with name secret-projected-all-test-volume-6b7ba54f-e4f4-4fcc-9add-701255011dcc
STEP: Creating a pod to test Check all projections for projected volume plugin
Nov 22 23:40:43.057: INFO: Waiting up to 5m0s for pod "projected-volume-3c175983-c407-4e30-8ee1-240286bedbd5" in namespace "projected-8704" to be "success or failure"
Nov 22 23:40:43.061: INFO: Pod "projected-volume-3c175983-c407-4e30-8ee1-240286bedbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368839ms
Nov 22 23:40:45.081: INFO: Pod "projected-volume-3c175983-c407-4e30-8ee1-240286bedbd5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023799739s
Nov 22 23:40:47.093: INFO: Pod "projected-volume-3c175983-c407-4e30-8ee1-240286bedbd5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035900438s
STEP: Saw pod success
Nov 22 23:40:47.093: INFO: Pod "projected-volume-3c175983-c407-4e30-8ee1-240286bedbd5" satisfied condition "success or failure"
Nov 22 23:40:47.097: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-3c175983-c407-4e30-8ee1-240286bedbd5 container projected-all-volume-test: 
STEP: delete the pod
Nov 22 23:40:47.133: INFO: Waiting for pod projected-volume-3c175983-c407-4e30-8ee1-240286bedbd5 to disappear
Nov 22 23:40:47.145: INFO: Pod projected-volume-3c175983-c407-4e30-8ee1-240286bedbd5 no longer exists
[AfterEach] [sig-storage] Projected combined
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:40:47.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8704" for this suite.
Nov 22 23:40:53.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:40:53.257: INFO: namespace projected-8704 deletion completed in 6.10844846s

• [SLOW TEST:10.275 seconds]
[sig-storage] Projected combined
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
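Annotation: the Projected combined test above mounts a configMap and a secret through a single projected volume and checks every component appears. A sketch of the merge the projected volume plugin performs, modeling each source as a dict of relative path to bytes and treating colliding paths as an error (an assumption about collision handling, kept strict here for safety):

```python
def project(sources):
    """Merge multiple volume sources (configMap, secret, downwardAPI)
    into one file tree, as a projected volume presents them."""
    tree = {}
    for src in sources:
        for path, data in src.items():
            if path in tree:
                raise ValueError(f"conflicting path in projection: {path}")
            tree[path] = data
    return tree

configmap_src = {"configmap-data": b"configmap-value"}
secret_src = {"secret-data": b"secret-value"}
volume = project([configmap_src, secret_src])
```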
SSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:40:53.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:40:53.371: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/: 
alternatives.log
containers/

>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:40:59.607: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Nov 22 23:40:59.621: INFO: Number of nodes with available pods: 0
Nov 22 23:40:59.621: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Nov 22 23:40:59.669: INFO: Number of nodes with available pods: 0
Nov 22 23:40:59.669: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:00.673: INFO: Number of nodes with available pods: 0
Nov 22 23:41:00.673: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:01.678: INFO: Number of nodes with available pods: 0
Nov 22 23:41:01.678: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:02.673: INFO: Number of nodes with available pods: 1
Nov 22 23:41:02.673: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Nov 22 23:41:02.700: INFO: Number of nodes with available pods: 1
Nov 22 23:41:02.700: INFO: Number of running nodes: 0, number of available pods: 1
Nov 22 23:41:03.705: INFO: Number of nodes with available pods: 0
Nov 22 23:41:03.705: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Nov 22 23:41:03.719: INFO: Number of nodes with available pods: 0
Nov 22 23:41:03.719: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:04.722: INFO: Number of nodes with available pods: 0
Nov 22 23:41:04.722: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:05.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:05.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:06.722: INFO: Number of nodes with available pods: 0
Nov 22 23:41:06.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:07.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:07.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:08.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:08.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:09.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:09.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:10.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:10.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:11.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:11.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:12.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:12.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:13.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:13.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:14.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:14.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:15.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:15.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:16.722: INFO: Number of nodes with available pods: 0
Nov 22 23:41:16.722: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:17.723: INFO: Number of nodes with available pods: 0
Nov 22 23:41:17.723: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:41:18.723: INFO: Number of nodes with available pods: 1
Nov 22 23:41:18.723: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6252, will wait for the garbage collector to delete the pods
Nov 22 23:41:18.787: INFO: Deleting DaemonSet.extensions daemon-set took: 6.392725ms
Nov 22 23:41:19.088: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.27094ms
Nov 22 23:41:25.391: INFO: Number of nodes with available pods: 0
Nov 22 23:41:25.391: INFO: Number of running nodes: 0, number of available pods: 0
Nov 22 23:41:25.393: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6252/daemonsets","resourceVersion":"10992361"},"items":null}

Nov 22 23:41:25.395: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6252/pods","resourceVersion":"10992361"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:41:25.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6252" for this suite.
Nov 22 23:41:31.686: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:41:31.854: INFO: namespace daemonsets-6252 deletion completed in 6.430215766s

• [SLOW TEST:32.316 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
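Annotation: the complex-daemon test above drives scheduling by relabelling a node from blue to green and updating the DaemonSet's node selector to match. The predicate it relies on is simple label-subset matching, sketched here (function name is my own; the real scheduler also considers taints and other predicates, as the `DaemonSet pods can't tolerate node ... taints` lines later in this log show):

```python
def daemon_should_run(node_labels, node_selector):
    """A DaemonSet pod lands on a node only if every key/value of the
    node selector matches the node's labels, which is why relabelling
    the node unschedules and reschedules the daemon pod above."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())
```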
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:41:31.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:41:31.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1819de1d-ef17-4b09-9cfe-92675cfdc3fd" in namespace "downward-api-6482" to be "success or failure"
Nov 22 23:41:31.937: INFO: Pod "downwardapi-volume-1819de1d-ef17-4b09-9cfe-92675cfdc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 25.966809ms
Nov 22 23:41:33.974: INFO: Pod "downwardapi-volume-1819de1d-ef17-4b09-9cfe-92675cfdc3fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06251034s
Nov 22 23:41:35.978: INFO: Pod "downwardapi-volume-1819de1d-ef17-4b09-9cfe-92675cfdc3fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.066991341s
STEP: Saw pod success
Nov 22 23:41:35.978: INFO: Pod "downwardapi-volume-1819de1d-ef17-4b09-9cfe-92675cfdc3fd" satisfied condition "success or failure"
Nov 22 23:41:35.982: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-1819de1d-ef17-4b09-9cfe-92675cfdc3fd container client-container: 
STEP: delete the pod
Nov 22 23:41:36.002: INFO: Waiting for pod downwardapi-volume-1819de1d-ef17-4b09-9cfe-92675cfdc3fd to disappear
Nov 22 23:41:36.006: INFO: Pod downwardapi-volume-1819de1d-ef17-4b09-9cfe-92675cfdc3fd no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:41:36.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6482" for this suite.
Nov 22 23:41:42.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:41:42.107: INFO: namespace downward-api-6482 deletion completed in 6.098409329s

• [SLOW TEST:10.253 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
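Annotation: the Downward API volume test above exposes the container's memory limit through a `resourceFieldRef`. Roughly, the quantity is divided by the requested divisor and written as an integer string; the rounding-up shown here is my assumption about how fractional results are handled, not a verified detail:

```python
def resource_field_value(limit_bytes, divisor=1):
    """Render a downward-API resource field: raw quantity divided by the
    divisor, rounded up to an integer string. E.g. a 64Mi memory limit
    with a 1Mi divisor reads back as "64" from the mounted file."""
    return str(-(-limit_bytes // divisor))  # ceiling division
```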
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:41:42.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Nov 22 23:41:42.201: INFO: Waiting up to 5m0s for pod "downward-api-0cb769a1-0cda-45d4-ad3f-e2dd942ed814" in namespace "downward-api-7224" to be "success or failure"
Nov 22 23:41:42.224: INFO: Pod "downward-api-0cb769a1-0cda-45d4-ad3f-e2dd942ed814": Phase="Pending", Reason="", readiness=false. Elapsed: 22.849343ms
Nov 22 23:41:44.245: INFO: Pod "downward-api-0cb769a1-0cda-45d4-ad3f-e2dd942ed814": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044144272s
Nov 22 23:41:46.249: INFO: Pod "downward-api-0cb769a1-0cda-45d4-ad3f-e2dd942ed814": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048699025s
STEP: Saw pod success
Nov 22 23:41:46.250: INFO: Pod "downward-api-0cb769a1-0cda-45d4-ad3f-e2dd942ed814" satisfied condition "success or failure"
Nov 22 23:41:46.253: INFO: Trying to get logs from node iruya-worker2 pod downward-api-0cb769a1-0cda-45d4-ad3f-e2dd942ed814 container dapi-container: 
STEP: delete the pod
Nov 22 23:41:46.280: INFO: Waiting for pod downward-api-0cb769a1-0cda-45d4-ad3f-e2dd942ed814 to disappear
Nov 22 23:41:46.284: INFO: Pod downward-api-0cb769a1-0cda-45d4-ad3f-e2dd942ed814 no longer exists
[AfterEach] [sig-node] Downward API
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:41:46.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7224" for this suite.
Nov 22 23:41:52.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:41:52.390: INFO: namespace downward-api-7224 deletion completed in 6.102911187s

• [SLOW TEST:10.283 seconds]
[sig-node] Downward API
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:41:52.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-2605/configmap-test-2a8b7076-5cb0-4ce9-8c62-9c06256d3dfb
STEP: Creating a pod to test consume configMaps
Nov 22 23:41:52.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab2e6747-e8f4-4fec-8879-e7a0c54d1ed4" in namespace "configmap-2605" to be "success or failure"
Nov 22 23:41:52.482: INFO: Pod "pod-configmaps-ab2e6747-e8f4-4fec-8879-e7a0c54d1ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.31612ms
Nov 22 23:41:54.486: INFO: Pod "pod-configmaps-ab2e6747-e8f4-4fec-8879-e7a0c54d1ed4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013899328s
Nov 22 23:41:56.490: INFO: Pod "pod-configmaps-ab2e6747-e8f4-4fec-8879-e7a0c54d1ed4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017393087s
STEP: Saw pod success
Nov 22 23:41:56.490: INFO: Pod "pod-configmaps-ab2e6747-e8f4-4fec-8879-e7a0c54d1ed4" satisfied condition "success or failure"
Nov 22 23:41:56.493: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-ab2e6747-e8f4-4fec-8879-e7a0c54d1ed4 container env-test: 
STEP: delete the pod
Nov 22 23:41:56.528: INFO: Waiting for pod pod-configmaps-ab2e6747-e8f4-4fec-8879-e7a0c54d1ed4 to disappear
Nov 22 23:41:56.542: INFO: Pod pod-configmaps-ab2e6747-e8f4-4fec-8879-e7a0c54d1ed4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:41:56.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2605" for this suite.
Nov 22 23:42:02.557: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:42:02.628: INFO: namespace configmap-2605 deletion completed in 6.083251607s

• [SLOW TEST:10.238 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
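Annotation: the ConfigMap test above consumes a ConfigMap through the environment. A sketch of turning a ConfigMap's data into a container env list, `envFrom`-style (the real kubelet skips keys that are not valid env var names; this simplification takes them all, and the sorting is only for determinism):

```python
def env_from_configmap(cm_data, prefix=""):
    """Expose each ConfigMap key as an environment variable entry,
    optionally prefixed, the way envFrom does."""
    return [{"name": prefix + k, "value": v}
            for k, v in sorted(cm_data.items())]

env = env_from_configmap({"DATA_1": "value-1"}, prefix="CONFIG_")
```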
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:42:02.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Nov 22 23:42:05.833: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:42:05.885: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-757" for this suite.
Nov 22 23:42:11.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:42:12.010: INFO: namespace container-runtime-757 deletion completed in 6.109429465s

• [SLOW TEST:9.381 seconds]
[k8s.io] Container Runtime
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:42:12.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Nov 22 23:42:12.106: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:12.108: INFO: Number of nodes with available pods: 0
Nov 22 23:42:12.108: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:42:13.114: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:13.118: INFO: Number of nodes with available pods: 0
Nov 22 23:42:13.118: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:42:14.312: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:14.325: INFO: Number of nodes with available pods: 0
Nov 22 23:42:14.325: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:42:15.113: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:15.117: INFO: Number of nodes with available pods: 0
Nov 22 23:42:15.117: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:42:16.120: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:16.123: INFO: Number of nodes with available pods: 0
Nov 22 23:42:16.123: INFO: Node iruya-worker is running more than one daemon pod
Nov 22 23:42:17.114: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:17.117: INFO: Number of nodes with available pods: 2
Nov 22 23:42:17.117: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Nov 22 23:42:17.140: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:17.143: INFO: Number of nodes with available pods: 1
Nov 22 23:42:17.143: INFO: Node iruya-worker2 is running more than one daemon pod
Nov 22 23:42:18.147: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:18.151: INFO: Number of nodes with available pods: 1
Nov 22 23:42:18.151: INFO: Node iruya-worker2 is running more than one daemon pod
Nov 22 23:42:19.148: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:19.151: INFO: Number of nodes with available pods: 1
Nov 22 23:42:19.151: INFO: Node iruya-worker2 is running more than one daemon pod
Nov 22 23:42:20.149: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:20.152: INFO: Number of nodes with available pods: 1
Nov 22 23:42:20.152: INFO: Node iruya-worker2 is running more than one daemon pod
Nov 22 23:42:21.149: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:21.153: INFO: Number of nodes with available pods: 1
Nov 22 23:42:21.153: INFO: Node iruya-worker2 is running more than one daemon pod
Nov 22 23:42:22.148: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:22.151: INFO: Number of nodes with available pods: 1
Nov 22 23:42:22.152: INFO: Node iruya-worker2 is running more than one daemon pod
Nov 22 23:42:23.149: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:23.152: INFO: Number of nodes with available pods: 1
Nov 22 23:42:23.152: INFO: Node iruya-worker2 is running more than one daemon pod
Nov 22 23:42:24.148: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Nov 22 23:42:24.151: INFO: Number of nodes with available pods: 2
Nov 22 23:42:24.151: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8556, will wait for the garbage collector to delete the pods
Nov 22 23:42:24.222: INFO: Deleting DaemonSet.extensions daemon-set took: 15.502318ms
Nov 22 23:42:24.522: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.277509ms
Nov 22 23:42:35.726: INFO: Number of nodes with available pods: 0
Nov 22 23:42:35.726: INFO: Number of running nodes: 0, number of available pods: 0
Nov 22 23:42:35.730: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8556/daemonsets","resourceVersion":"10992667"},"items":null}

Nov 22 23:42:35.732: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8556/pods","resourceVersion":"10992667"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:42:35.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8556" for this suite.
Nov 22 23:42:41.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:42:41.825: INFO: namespace daemonsets-8556 deletion completed in 6.081625348s

• [SLOW TEST:29.815 seconds]
[sig-apps] Daemon set [Serial]
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:42:41.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-8f5f460e-4f0c-4ee1-871e-23cb93a87405
STEP: Creating configMap with name cm-test-opt-upd-896581f5-6cac-435b-b381-6a421a268125
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-8f5f460e-4f0c-4ee1-871e-23cb93a87405
STEP: Updating configmap cm-test-opt-upd-896581f5-6cac-435b-b381-6a421a268125
STEP: Creating configMap with name cm-test-opt-create-ae6a2881-79a6-4b46-996f-b68390970bde
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:42:52.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7126" for this suite.
Nov 22 23:43:14.121: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:43:14.205: INFO: namespace projected-7126 deletion completed in 22.110800675s

• [SLOW TEST:32.380 seconds]
[sig-storage] Projected configMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:43:14.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Nov 22 23:43:14.271: INFO: Waiting up to 5m0s for pod "client-containers-d5e7ba02-0a29-442d-adbc-bbac519b0569" in namespace "containers-2138" to be "success or failure"
Nov 22 23:43:14.274: INFO: Pod "client-containers-d5e7ba02-0a29-442d-adbc-bbac519b0569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.824764ms
Nov 22 23:43:16.352: INFO: Pod "client-containers-d5e7ba02-0a29-442d-adbc-bbac519b0569": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080129784s
Nov 22 23:43:18.356: INFO: Pod "client-containers-d5e7ba02-0a29-442d-adbc-bbac519b0569": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.084647449s
STEP: Saw pod success
Nov 22 23:43:18.356: INFO: Pod "client-containers-d5e7ba02-0a29-442d-adbc-bbac519b0569" satisfied condition "success or failure"
Nov 22 23:43:18.360: INFO: Trying to get logs from node iruya-worker pod client-containers-d5e7ba02-0a29-442d-adbc-bbac519b0569 container test-container: 
STEP: delete the pod
Nov 22 23:43:18.398: INFO: Waiting for pod client-containers-d5e7ba02-0a29-442d-adbc-bbac519b0569 to disappear
Nov 22 23:43:18.412: INFO: Pod client-containers-d5e7ba02-0a29-442d-adbc-bbac519b0569 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:43:18.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2138" for this suite.
Nov 22 23:43:24.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:43:24.528: INFO: namespace containers-2138 deletion completed in 6.112487177s

• [SLOW TEST:10.323 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:43:24.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-f9bxt in namespace proxy-4889
I1122 23:43:24.645064       6 runners.go:180] Created replication controller with name: proxy-service-f9bxt, namespace: proxy-4889, replica count: 1
I1122 23:43:25.695597       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1122 23:43:26.695826       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1122 23:43:27.696086       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1122 23:43:28.696282       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1122 23:43:29.696490       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1122 23:43:30.696690       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1122 23:43:31.697080       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1122 23:43:32.697346       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1122 23:43:33.697660       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1122 23:43:34.697910       6 runners.go:180] proxy-service-f9bxt Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 22 23:43:34.708: INFO: setup took 10.132460526s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Nov 22 23:43:34.715: INFO: (0) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 6.432728ms)
Nov 22 23:43:34.716: INFO: (0) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 6.791658ms)
Nov 22 23:43:34.717: INFO: (0) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 7.981905ms)
Nov 22 23:43:34.721: INFO: (0) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 12.303808ms)
Nov 22 23:43:34.721: INFO: (0) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 12.339112ms)
Nov 22 23:43:34.721: INFO: (0) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 12.462581ms)
Nov 22 23:43:34.721: INFO: (0) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 12.333065ms)
Nov 22 23:43:34.721: INFO: (0) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 12.433897ms)
Nov 22 23:43:34.721: INFO: (0) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 12.382666ms)
Nov 22 23:43:34.721: INFO: (0) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 12.320531ms)
Nov 22 23:43:34.721: INFO: (0) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 12.3194ms)
Nov 22 23:43:34.722: INFO: (0) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 12.950496ms)
Nov 22 23:43:34.726: INFO: (0) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 17.085214ms)
Nov 22 23:43:34.726: INFO: (0) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: ... (200; 4.926138ms)
Nov 22 23:43:34.734: INFO: (1) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 5.217321ms)
Nov 22 23:43:34.734: INFO: (1) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 5.365383ms)
Nov 22 23:43:34.734: INFO: (1) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 5.503627ms)
Nov 22 23:43:34.734: INFO: (1) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test<... (200; 6.18817ms)
Nov 22 23:43:34.735: INFO: (1) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 6.218442ms)
Nov 22 23:43:34.736: INFO: (1) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 8.126041ms)
Nov 22 23:43:34.736: INFO: (1) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 8.199724ms)
Nov 22 23:43:34.736: INFO: (1) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 8.024264ms)
Nov 22 23:43:34.736: INFO: (1) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 8.064246ms)
Nov 22 23:43:34.736: INFO: (1) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 8.249291ms)
Nov 22 23:43:34.737: INFO: (1) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 9.040003ms)
Nov 22 23:43:34.741: INFO: (2) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: ... (200; 4.1642ms)
Nov 22 23:43:34.742: INFO: (2) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 4.173855ms)
Nov 22 23:43:34.742: INFO: (2) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 4.316316ms)
Nov 22 23:43:34.742: INFO: (2) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 4.873147ms)
Nov 22 23:43:34.743: INFO: (2) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 5.039308ms)
Nov 22 23:43:34.743: INFO: (2) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 5.016221ms)
Nov 22 23:43:34.743: INFO: (2) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 5.011966ms)
Nov 22 23:43:34.743: INFO: (2) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 4.949138ms)
Nov 22 23:43:34.746: INFO: (3) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 3.401426ms)
Nov 22 23:43:34.746: INFO: (3) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 3.417929ms)
Nov 22 23:43:34.746: INFO: (3) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 3.594281ms)
Nov 22 23:43:34.746: INFO: (3) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test<... (200; 4.092181ms)
Nov 22 23:43:34.748: INFO: (3) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 4.773231ms)
Nov 22 23:43:34.748: INFO: (3) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 4.811205ms)
Nov 22 23:43:34.748: INFO: (3) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 4.894906ms)
Nov 22 23:43:34.748: INFO: (3) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 4.941782ms)
Nov 22 23:43:34.748: INFO: (3) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 5.065715ms)
Nov 22 23:43:34.748: INFO: (3) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 5.005336ms)
Nov 22 23:43:34.752: INFO: (4) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.714297ms)
Nov 22 23:43:34.752: INFO: (4) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 3.79012ms)
Nov 22 23:43:34.752: INFO: (4) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.98462ms)
Nov 22 23:43:34.752: INFO: (4) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 4.389755ms)
Nov 22 23:43:34.752: INFO: (4) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: ... (200; 4.597744ms)
Nov 22 23:43:34.753: INFO: (4) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 4.688031ms)
Nov 22 23:43:34.753: INFO: (4) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 4.625856ms)
Nov 22 23:43:34.753: INFO: (4) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 4.683615ms)
Nov 22 23:43:34.753: INFO: (4) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 4.726436ms)
Nov 22 23:43:34.753: INFO: (4) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 4.715486ms)
Nov 22 23:43:34.753: INFO: (4) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 4.927613ms)
Nov 22 23:43:34.753: INFO: (4) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 5.142045ms)
Nov 22 23:43:34.753: INFO: (4) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 5.143802ms)
Nov 22 23:43:34.756: INFO: (5) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 2.638833ms)
Nov 22 23:43:34.756: INFO: (5) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 2.777769ms)
Nov 22 23:43:34.756: INFO: (5) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.149068ms)
Nov 22 23:43:34.757: INFO: (5) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 3.904291ms)
Nov 22 23:43:34.757: INFO: (5) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 4.129251ms)
Nov 22 23:43:34.758: INFO: (5) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 4.314363ms)
Nov 22 23:43:34.758: INFO: (5) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 4.421111ms)
Nov 22 23:43:34.758: INFO: (5) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 4.587361ms)
Nov 22 23:43:34.758: INFO: (5) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 4.881262ms)
Nov 22 23:43:34.758: INFO: (5) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 5.029088ms)
Nov 22 23:43:34.758: INFO: (5) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test (200; 24.415698ms)
Nov 22 23:43:34.783: INFO: (6) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 24.434849ms)
Nov 22 23:43:34.784: INFO: (6) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 24.435091ms)
Nov 22 23:43:34.784: INFO: (6) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 24.439408ms)
Nov 22 23:43:34.784: INFO: (6) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 24.783408ms)
Nov 22 23:43:34.784: INFO: (6) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 24.45095ms)
Nov 22 23:43:34.784: INFO: (6) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 24.686149ms)
Nov 22 23:43:34.784: INFO: (6) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 24.755766ms)
Nov 22 23:43:34.784: INFO: (6) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test<... (200; 4.295452ms)
Nov 22 23:43:34.790: INFO: (7) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 4.629116ms)
Nov 22 23:43:34.790: INFO: (7) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: ... (200; 7.707719ms)
Nov 22 23:43:34.793: INFO: (7) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 7.521518ms)
Nov 22 23:43:34.793: INFO: (7) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 7.472177ms)
Nov 22 23:43:34.793: INFO: (7) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 7.698679ms)
Nov 22 23:43:34.793: INFO: (7) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 7.743419ms)
Nov 22 23:43:34.793: INFO: (7) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 7.638605ms)
Nov 22 23:43:34.793: INFO: (7) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 7.661526ms)
Nov 22 23:43:34.793: INFO: (7) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 7.561746ms)
Nov 22 23:43:34.796: INFO: (8) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 2.21622ms)
Nov 22 23:43:34.797: INFO: (8) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test<... (200; 4.505681ms)
Nov 22 23:43:34.798: INFO: (8) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 4.555761ms)
Nov 22 23:43:34.798: INFO: (8) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 4.596553ms)
Nov 22 23:43:34.798: INFO: (8) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 4.750583ms)
Nov 22 23:43:34.798: INFO: (8) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 4.898455ms)
Nov 22 23:43:34.798: INFO: (8) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 5.078135ms)
Nov 22 23:43:34.799: INFO: (8) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 5.382151ms)
Nov 22 23:43:34.799: INFO: (8) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 5.452028ms)
Nov 22 23:43:34.799: INFO: (8) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 5.582087ms)
Nov 22 23:43:34.799: INFO: (8) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 5.614029ms)
Nov 22 23:43:34.799: INFO: (8) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 5.664578ms)
Nov 22 23:43:34.799: INFO: (8) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 5.764016ms)
Nov 22 23:43:34.799: INFO: (8) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 5.775289ms)
Nov 22 23:43:34.803: INFO: (9) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.728474ms)
Nov 22 23:43:34.803: INFO: (9) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 3.788133ms)
Nov 22 23:43:34.803: INFO: (9) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 3.871214ms)
Nov 22 23:43:34.803: INFO: (9) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 3.962932ms)
Nov 22 23:43:34.803: INFO: (9) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 4.093424ms)
Nov 22 23:43:34.803: INFO: (9) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 4.088156ms)
Nov 22 23:43:34.803: INFO: (9) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 4.038038ms)
Nov 22 23:43:34.803: INFO: (9) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test (200; 3.964365ms)
Nov 22 23:43:34.809: INFO: (10) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: ... (200; 3.99337ms)
Nov 22 23:43:34.809: INFO: (10) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 4.181303ms)
Nov 22 23:43:34.810: INFO: (10) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 4.28923ms)
Nov 22 23:43:34.810: INFO: (10) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 4.516536ms)
Nov 22 23:43:34.810: INFO: (10) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 4.640602ms)
Nov 22 23:43:34.810: INFO: (10) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 4.722622ms)
Nov 22 23:43:34.810: INFO: (10) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 4.71299ms)
Nov 22 23:43:34.810: INFO: (10) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 4.72499ms)
Nov 22 23:43:34.810: INFO: (10) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 4.908045ms)
Nov 22 23:43:34.813: INFO: (11) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 3.271722ms)
Nov 22 23:43:34.814: INFO: (11) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 3.311435ms)
Nov 22 23:43:34.814: INFO: (11) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 3.356224ms)
Nov 22 23:43:34.814: INFO: (11) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 3.440293ms)
Nov 22 23:43:34.814: INFO: (11) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 3.385885ms)
Nov 22 23:43:34.814: INFO: (11) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test (200; 3.510986ms)
Nov 22 23:43:34.819: INFO: (12) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.65187ms)
Nov 22 23:43:34.819: INFO: (12) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 3.907689ms)
Nov 22 23:43:34.819: INFO: (12) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 3.894232ms)
Nov 22 23:43:34.819: INFO: (12) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 3.866375ms)
Nov 22 23:43:34.820: INFO: (12) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 4.554626ms)
Nov 22 23:43:34.820: INFO: (12) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 4.653431ms)
Nov 22 23:43:34.820: INFO: (12) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 4.739438ms)
Nov 22 23:43:34.820: INFO: (12) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 4.64231ms)
Nov 22 23:43:34.821: INFO: (12) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 4.878379ms)
Nov 22 23:43:34.821: INFO: (12) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test (200; 3.075923ms)
Nov 22 23:43:34.824: INFO: (13) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 3.132701ms)
Nov 22 23:43:34.824: INFO: (13) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 3.102553ms)
Nov 22 23:43:34.825: INFO: (13) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.977605ms)
Nov 22 23:43:34.825: INFO: (13) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test<... (200; 2.580288ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 3.201215ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.231179ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 3.321898ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 3.297553ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 3.2853ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.386881ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 3.481605ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 3.460088ms)
Nov 22 23:43:34.830: INFO: (14) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test<... (200; 3.3443ms)
Nov 22 23:43:34.835: INFO: (15) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.401625ms)
Nov 22 23:43:34.835: INFO: (15) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 3.421911ms)
Nov 22 23:43:34.835: INFO: (15) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 3.533198ms)
Nov 22 23:43:34.835: INFO: (15) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: ... (200; 4.339426ms)
Nov 22 23:43:34.836: INFO: (15) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 4.480223ms)
Nov 22 23:43:34.837: INFO: (15) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 5.512303ms)
Nov 22 23:43:34.838: INFO: (15) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 5.659052ms)
Nov 22 23:43:34.838: INFO: (15) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 5.662946ms)
Nov 22 23:43:34.838: INFO: (15) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 5.706448ms)
Nov 22 23:43:34.838: INFO: (15) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 5.759773ms)
Nov 22 23:43:34.838: INFO: (15) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 5.850886ms)
Nov 22 23:43:34.841: INFO: (16) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 3.275715ms)
Nov 22 23:43:34.842: INFO: (16) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 3.870803ms)
Nov 22 23:43:34.842: INFO: (16) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 3.881674ms)
Nov 22 23:43:34.842: INFO: (16) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.915585ms)
Nov 22 23:43:34.842: INFO: (16) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 3.975711ms)
Nov 22 23:43:34.842: INFO: (16) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 4.058694ms)
Nov 22 23:43:34.842: INFO: (16) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: ... (200; 4.019196ms)
Nov 22 23:43:34.843: INFO: (16) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 5.004747ms)
Nov 22 23:43:34.843: INFO: (16) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 4.937084ms)
Nov 22 23:43:34.843: INFO: (16) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 4.955219ms)
Nov 22 23:43:34.843: INFO: (16) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 5.020374ms)
Nov 22 23:43:34.843: INFO: (16) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 5.082733ms)
Nov 22 23:43:34.843: INFO: (16) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 5.119499ms)
Nov 22 23:43:34.845: INFO: (17) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 2.227655ms)
Nov 22 23:43:34.845: INFO: (17) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 2.184316ms)
Nov 22 23:43:34.846: INFO: (17) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 3.201141ms)
Nov 22 23:43:34.846: INFO: (17) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: ... (200; 3.355261ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 3.938497ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 3.990027ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.9255ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 3.936482ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.963573ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 4.050887ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 4.109016ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 4.110528ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 4.23381ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 4.240601ms)
Nov 22 23:43:34.847: INFO: (17) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 4.244283ms)
Nov 22 23:43:34.849: INFO: (18) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 1.743229ms)
Nov 22 23:43:34.851: INFO: (18) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq/proxy/: test (200; 3.908318ms)
Nov 22 23:43:34.852: INFO: (18) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 4.433811ms)
Nov 22 23:43:34.852: INFO: (18) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 4.44369ms)
Nov 22 23:43:34.852: INFO: (18) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 4.912702ms)
Nov 22 23:43:34.852: INFO: (18) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 5.044991ms)
Nov 22 23:43:34.853: INFO: (18) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 5.367735ms)
Nov 22 23:43:34.853: INFO: (18) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 5.390634ms)
Nov 22 23:43:34.853: INFO: (18) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 5.405236ms)
Nov 22 23:43:34.853: INFO: (18) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:443/proxy/: test (200; 3.505209ms)
Nov 22 23:43:34.861: INFO: (19) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.554549ms)
Nov 22 23:43:34.861: INFO: (19) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:1080/proxy/: test<... (200; 3.642034ms)
Nov 22 23:43:34.861: INFO: (19) /api/v1/namespaces/proxy-4889/pods/proxy-service-f9bxt-87spq:160/proxy/: foo (200; 3.658053ms)
Nov 22 23:43:34.861: INFO: (19) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:460/proxy/: tls baz (200; 3.732602ms)
Nov 22 23:43:34.861: INFO: (19) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:1080/proxy/: ... (200; 3.672688ms)
Nov 22 23:43:34.861: INFO: (19) /api/v1/namespaces/proxy-4889/pods/https:proxy-service-f9bxt-87spq:462/proxy/: tls qux (200; 3.744359ms)
Nov 22 23:43:34.861: INFO: (19) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:162/proxy/: bar (200; 3.72515ms)
Nov 22 23:43:34.862: INFO: (19) /api/v1/namespaces/proxy-4889/pods/http:proxy-service-f9bxt-87spq:160/proxy/: foo (200; 4.242487ms)
Nov 22 23:43:34.862: INFO: (19) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname1/proxy/: foo (200; 4.671517ms)
Nov 22 23:43:34.862: INFO: (19) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname1/proxy/: tls baz (200; 4.841672ms)
Nov 22 23:43:34.862: INFO: (19) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname1/proxy/: foo (200; 4.888519ms)
Nov 22 23:43:34.862: INFO: (19) /api/v1/namespaces/proxy-4889/services/https:proxy-service-f9bxt:tlsportname2/proxy/: tls qux (200; 5.099739ms)
Nov 22 23:43:34.862: INFO: (19) /api/v1/namespaces/proxy-4889/services/http:proxy-service-f9bxt:portname2/proxy/: bar (200; 5.036088ms)
Nov 22 23:43:34.862: INFO: (19) /api/v1/namespaces/proxy-4889/services/proxy-service-f9bxt:portname2/proxy/: bar (200; 5.05271ms)
STEP: deleting ReplicationController proxy-service-f9bxt in namespace proxy-4889, will wait for the garbage collector to delete the pods
Nov 22 23:43:34.921: INFO: Deleting ReplicationController proxy-service-f9bxt took: 6.741747ms
Nov 22 23:43:35.221: INFO: Terminating ReplicationController proxy-service-f9bxt pods took: 300.209653ms
[AfterEach] version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:43:37.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4889" for this suite.
Nov 22 23:43:43.749: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:43:43.828: INFO: namespace proxy-4889 deletion completed in 6.089851024s

• [SLOW TEST:19.300 seconds]
[sig-network] Proxy
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
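The iterations above hit every combination of pod vs. service, optional `http:`/`https:` scheme prefix, and port through the apiserver's proxy subresource. The path layout can be sketched with a small helper (hypothetical, for illustration only; the e2e framework assembles these URLs internally):

```python
# Illustrative builders for the apiserver proxy paths exercised above.
# Not part of the e2e framework; shown only to make the URL scheme explicit.

def pod_proxy_path(namespace, pod, port=None, scheme=None):
    """/api/v1/namespaces/<ns>/pods/[<scheme>:]<pod>[:<port>]/proxy/"""
    name = pod
    if scheme:
        name = f"{scheme}:{name}"
    if port is not None:
        name = f"{name}:{port}"
    return f"/api/v1/namespaces/{namespace}/pods/{name}/proxy/"

def service_proxy_path(namespace, service, port_name, scheme=None):
    """/api/v1/namespaces/<ns>/services/[<scheme>:]<svc>:<port>/proxy/"""
    name = f"{scheme}:{service}" if scheme else service
    return f"/api/v1/namespaces/{namespace}/services/{name}:{port_name}/proxy/"

print(pod_proxy_path("proxy-4889", "proxy-service-f9bxt-87spq", 160))
print(service_proxy_path("proxy-4889", "proxy-service-f9bxt", "portname1", "http"))
```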
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:43:43.829: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Nov 22 23:43:43.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2132'
Nov 22 23:43:44.283: INFO: stderr: ""
Nov 22 23:43:44.283: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Nov 22 23:43:45.290: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:43:45.291: INFO: Found 0 / 1
Nov 22 23:43:46.287: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:43:46.287: INFO: Found 0 / 1
Nov 22 23:43:47.287: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:43:47.288: INFO: Found 0 / 1
Nov 22 23:43:48.288: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:43:48.288: INFO: Found 1 / 1
Nov 22 23:43:48.288: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Nov 22 23:43:48.292: INFO: Selector matched 1 pods for map[app:redis]
Nov 22 23:43:48.292: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Nov 22 23:43:48.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lwnt9 redis-master --namespace=kubectl-2132'
Nov 22 23:43:48.409: INFO: stderr: ""
Nov 22 23:43:48.409: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Nov 23:43:46.967 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Nov 23:43:46.967 # Server started, Redis version 3.2.12\n1:M 22 Nov 23:43:46.967 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Nov 23:43:46.967 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Nov 22 23:43:48.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lwnt9 redis-master --namespace=kubectl-2132 --tail=1'
Nov 22 23:43:48.508: INFO: stderr: ""
Nov 22 23:43:48.508: INFO: stdout: "1:M 22 Nov 23:43:46.967 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Nov 22 23:43:48.508: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lwnt9 redis-master --namespace=kubectl-2132 --limit-bytes=1'
Nov 22 23:43:48.607: INFO: stderr: ""
Nov 22 23:43:48.607: INFO: stdout: " "
STEP: exposing timestamps
Nov 22 23:43:48.607: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lwnt9 redis-master --namespace=kubectl-2132 --tail=1 --timestamps'
Nov 22 23:43:48.709: INFO: stderr: ""
Nov 22 23:43:48.709: INFO: stdout: "2020-11-22T23:43:46.967985077Z 1:M 22 Nov 23:43:46.967 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Nov 22 23:43:51.209: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lwnt9 redis-master --namespace=kubectl-2132 --since=1s'
Nov 22 23:43:51.313: INFO: stderr: ""
Nov 22 23:43:51.313: INFO: stdout: ""
Nov 22 23:43:51.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-lwnt9 redis-master --namespace=kubectl-2132 --since=24h'
Nov 22 23:43:51.433: INFO: stderr: ""
Nov 22 23:43:51.433: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 22 Nov 23:43:46.967 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 22 Nov 23:43:46.967 # Server started, Redis version 3.2.12\n1:M 22 Nov 23:43:46.967 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 22 Nov 23:43:46.967 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Nov 22 23:43:51.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2132'
Nov 22 23:43:51.535: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Nov 22 23:43:51.535: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Nov 22 23:43:51.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2132'
Nov 22 23:43:51.637: INFO: stderr: "No resources found.\n"
Nov 22 23:43:51.637: INFO: stdout: ""
Nov 22 23:43:51.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2132 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Nov 22 23:43:51.756: INFO: stderr: ""
Nov 22 23:43:51.756: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:43:51.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2132" for this suite.
Nov 22 23:44:13.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:44:13.864: INFO: namespace kubectl-2132 deletion completed in 22.105526327s

• [SLOW TEST:30.036 seconds]
[sig-cli] Kubectl client
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
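The kubectl invocations above exercise `--tail`, `--limit-bytes`, and `--timestamps` against the Redis master's log. The observable semantics of the first two flags can be modeled in a few lines of plain Python (an illustrative approximation, not kubectl's implementation):

```python
# Model the log-filtering semantics exercised above.
# Illustrative only; kubectl applies these filters server-side via the kubelet.

def tail(log: str, n: int) -> str:
    """--tail=N: keep only the last N lines of the log."""
    lines = log.splitlines(keepends=True)
    return "".join(lines[-n:])

def limit_bytes(log: str, n: int) -> str:
    """--limit-bytes=N: keep only the first N bytes of the log."""
    return log.encode()[:n].decode(errors="ignore")

log = "Server started\nWARNING: THP enabled\nready to accept connections\n"
print(tail(log, 1))
print(repr(limit_bytes(log, 1)))
```

This matches what the run shows: `--tail=1` returns only the final "ready to accept connections" line, and `--limit-bytes=1` returns a single character.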
SSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:44:13.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:44:13.922: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ff71b538-7347-4fb7-81a2-9dc668eaac84" in namespace "projected-4998" to be "success or failure"
Nov 22 23:44:13.957: INFO: Pod "downwardapi-volume-ff71b538-7347-4fb7-81a2-9dc668eaac84": Phase="Pending", Reason="", readiness=false. Elapsed: 35.285181ms
Nov 22 23:44:15.961: INFO: Pod "downwardapi-volume-ff71b538-7347-4fb7-81a2-9dc668eaac84": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039380296s
Nov 22 23:44:17.965: INFO: Pod "downwardapi-volume-ff71b538-7347-4fb7-81a2-9dc668eaac84": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043649617s
STEP: Saw pod success
Nov 22 23:44:17.965: INFO: Pod "downwardapi-volume-ff71b538-7347-4fb7-81a2-9dc668eaac84" satisfied condition "success or failure"
Nov 22 23:44:17.968: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ff71b538-7347-4fb7-81a2-9dc668eaac84 container client-container: 
STEP: delete the pod
Nov 22 23:44:17.987: INFO: Waiting for pod downwardapi-volume-ff71b538-7347-4fb7-81a2-9dc668eaac84 to disappear
Nov 22 23:44:18.028: INFO: Pod downwardapi-volume-ff71b538-7347-4fb7-81a2-9dc668eaac84 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:44:18.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4998" for this suite.
Nov 22 23:44:24.043: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:44:24.120: INFO: namespace projected-4998 deletion completed in 6.089007876s

• [SLOW TEST:10.256 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:44:24.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:44:28.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4805" for this suite.
Nov 22 23:45:08.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:45:08.377: INFO: namespace kubelet-test-4805 deletion completed in 40.104730635s

• [SLOW TEST:44.256 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:45:08.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-7595ca74-d47b-4111-bf3a-36875016dc7f
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-7595ca74-d47b-4111-bf3a-36875016dc7f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:46:28.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6452" for this suite.
Nov 22 23:46:50.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:46:50.908: INFO: namespace configmap-6452 deletion completed in 22.079860626s

• [SLOW TEST:102.531 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
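The "waiting to observe update in volume" step polls the mounted file until the kubelet's sync loop projects the updated ConfigMap data, which is why this spec runs long (102 s). A generic poll-until-true loop in that spirit (hypothetical helper, not the framework's actual code):

```python
import time

def wait_for(condition, timeout=90.0, interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` elapses.
    Mirrors the poll-and-recheck pattern used while waiting for an updated
    ConfigMap to show up in the mounted volume. Illustrative only."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError("condition not met within timeout")

# Simulate a mounted file whose contents flip after a few kubelet sync passes.
state = {"polls": 0}
def file_updated():
    state["polls"] += 1
    return state["polls"] >= 3  # pretend the sync landed on the third check

assert wait_for(file_updated, timeout=5.0, interval=0.01)
```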
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:46:50.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Nov 22 23:46:56.024: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:46:57.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-1507" for this suite.
Nov 22 23:47:19.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:47:19.180: INFO: namespace replicaset-1507 deletion completed in 22.125541377s

• [SLOW TEST:28.272 seconds]
[sig-apps] ReplicaSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
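Adoption and release in the spec above both hinge on one decision: whether a pod's labels satisfy the ReplicaSet's selector. A toy model of that check (illustrative only; the real reconciliation lives in the kube-controller-manager):

```python
# Toy model of the adoption/release decision exercised above: an orphan pod
# is adopted when its labels match the selector, and an owned pod is
# released when a label change breaks the match. Illustrative only.

def selector_matches(selector: dict, labels: dict) -> bool:
    """True when every selector key/value pair is present in the pod labels."""
    return all(labels.get(k) == v for k, v in selector.items())

selector = {"name": "pod-adoption-release"}

# Orphan pod whose 'name' label matches -> adopted by the ReplicaSet.
assert selector_matches(selector, {"name": "pod-adoption-release"})

# The matched label is changed -> the pod is released.
assert not selector_matches(selector, {"name": "pod-adoption-release-old"})
```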
SSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:47:19.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-da68b1e8-b9c6-448e-9004-b77358cef4c3
STEP: Creating a pod to test consume secrets
Nov 22 23:47:19.248: INFO: Waiting up to 5m0s for pod "pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c" in namespace "secrets-745" to be "success or failure"
Nov 22 23:47:19.252: INFO: Pod "pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.700817ms
Nov 22 23:47:21.421: INFO: Pod "pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.172842719s
Nov 22 23:47:23.425: INFO: Pod "pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c": Phase="Running", Reason="", readiness=true. Elapsed: 4.176783502s
Nov 22 23:47:25.429: INFO: Pod "pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.180728155s
STEP: Saw pod success
Nov 22 23:47:25.429: INFO: Pod "pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c" satisfied condition "success or failure"
Nov 22 23:47:25.431: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c container secret-env-test: 
STEP: delete the pod
Nov 22 23:47:25.493: INFO: Waiting for pod pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c to disappear
Nov 22 23:47:25.497: INFO: Pod pod-secrets-a252a50d-b86b-4057-9d52-3e54b9d9371c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:47:25.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-745" for this suite.
Nov 22 23:47:31.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:47:31.607: INFO: namespace secrets-745 deletion completed in 6.105717709s

• [SLOW TEST:12.426 seconds]
[sig-api-machinery] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
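The Secret-to-env-var consumption verified by the test above can be sketched with a minimal manifest. This is a hypothetical illustration, not the suite's generated objects; the names (`secret-test-demo`, `pod-secrets-demo`) and the busybox image are illustrative stand-ins:

```yaml
# Hypothetical sketch of a pod consuming a Secret through an env var,
# mirroring what the [sig-api-machinery] Secrets test above checks.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-demo
stringData:
  SECRET_DATA: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox
    # Print the injected variable, then exit; the pod reaches Succeeded,
    # which is the "success or failure" condition the test waits on.
    command: ["sh", "-c", "env | grep SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-test-demo
          key: SECRET_DATA
```

As in the log above, the test then fetches the container's logs to verify the value and deletes the pod.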
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:47:31.608: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:47:31.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2571" for this suite.
Nov 22 23:47:37.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:47:37.777: INFO: namespace services-2571 deletion completed in 6.10309255s
[AfterEach] [sig-network] Services
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.169 seconds]
[sig-network] Services
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:47:37.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-b70fdc15-fc12-4faf-b01f-d0425156f795
STEP: Creating configMap with name cm-test-opt-upd-e7e9c651-5643-4580-88ff-1ad8568f3d7e
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-b70fdc15-fc12-4faf-b01f-d0425156f795
STEP: Updating configmap cm-test-opt-upd-e7e9c651-5643-4580-88ff-1ad8568f3d7e
STEP: Creating configMap with name cm-test-opt-create-7171ee90-e3dc-4d49-a55e-599b8330744b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:47:45.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7667" for this suite.
Nov 22 23:48:07.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:48:08.064: INFO: namespace configmap-7667 deletion completed in 22.093288322s

• [SLOW TEST:30.286 seconds]
[sig-storage] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
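The "optional updates" behavior exercised above (delete one ConfigMap, update another, create a third, then watch the mounted files change) relies on `optional: true` ConfigMap volume sources. A minimal hypothetical sketch, with illustrative names rather than the suite's generated ones:

```yaml
# Hypothetical sketch: an optional ConfigMap volume lets the pod start even
# if the referenced ConfigMap is absent, and the kubelet refreshes the
# mounted files when the ConfigMap is created, updated, or deleted.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: cm-volume
    configMap:
      name: cm-test-opt-demo
      optional: true
```

Note that volume updates are applied asynchronously by the kubelet sync loop, which is why the test spends several seconds "waiting to observe update in volume".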
SSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:48:08.064: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1427
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-1427
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1427
Nov 22 23:48:09.737: INFO: Found 0 stateful pods, waiting for 1
Nov 22 23:48:19.742: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Nov 22 23:48:19.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1427 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Nov 22 23:48:23.784: INFO: stderr: "I1122 23:48:23.609875    3777 log.go:172] (0xc000b5a420) (0xc000622aa0) Create stream\nI1122 23:48:23.609906    3777 log.go:172] (0xc000b5a420) (0xc000622aa0) Stream added, broadcasting: 1\nI1122 23:48:23.612440    3777 log.go:172] (0xc000b5a420) Reply frame received for 1\nI1122 23:48:23.612481    3777 log.go:172] (0xc000b5a420) (0xc0007460a0) Create stream\nI1122 23:48:23.612496    3777 log.go:172] (0xc000b5a420) (0xc0007460a0) Stream added, broadcasting: 3\nI1122 23:48:23.613829    3777 log.go:172] (0xc000b5a420) Reply frame received for 3\nI1122 23:48:23.613894    3777 log.go:172] (0xc000b5a420) (0xc000300000) Create stream\nI1122 23:48:23.613918    3777 log.go:172] (0xc000b5a420) (0xc000300000) Stream added, broadcasting: 5\nI1122 23:48:23.615041    3777 log.go:172] (0xc000b5a420) Reply frame received for 5\nI1122 23:48:23.688146    3777 log.go:172] (0xc000b5a420) Data frame received for 5\nI1122 23:48:23.688176    3777 log.go:172] (0xc000300000) (5) Data frame handling\nI1122 23:48:23.688197    3777 log.go:172] (0xc000300000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:48:23.772749    3777 log.go:172] (0xc000b5a420) Data frame received for 3\nI1122 23:48:23.772794    3777 log.go:172] (0xc0007460a0) (3) Data frame handling\nI1122 23:48:23.772826    3777 log.go:172] (0xc0007460a0) (3) Data frame sent\nI1122 23:48:23.773382    3777 log.go:172] (0xc000b5a420) Data frame received for 3\nI1122 23:48:23.773417    3777 log.go:172] (0xc0007460a0) (3) Data frame handling\nI1122 23:48:23.773455    3777 log.go:172] (0xc000b5a420) Data frame received for 5\nI1122 23:48:23.773466    3777 log.go:172] (0xc000300000) (5) Data frame handling\nI1122 23:48:23.775220    3777 log.go:172] (0xc000b5a420) Data frame received for 1\nI1122 23:48:23.775252    3777 log.go:172] (0xc000622aa0) (1) Data frame handling\nI1122 23:48:23.775266    3777 log.go:172] (0xc000622aa0) (1) Data frame sent\nI1122 23:48:23.775280    
3777 log.go:172] (0xc000b5a420) (0xc000622aa0) Stream removed, broadcasting: 1\nI1122 23:48:23.775300    3777 log.go:172] (0xc000b5a420) Go away received\nI1122 23:48:23.775643    3777 log.go:172] (0xc000b5a420) (0xc000622aa0) Stream removed, broadcasting: 1\nI1122 23:48:23.775656    3777 log.go:172] (0xc000b5a420) (0xc0007460a0) Stream removed, broadcasting: 3\nI1122 23:48:23.775662    3777 log.go:172] (0xc000b5a420) (0xc000300000) Stream removed, broadcasting: 5\n"
Nov 22 23:48:23.785: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Nov 22 23:48:23.785: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Nov 22 23:48:23.788: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Nov 22 23:48:33.792: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Nov 22 23:48:33.792: INFO: Waiting for statefulset status.replicas updated to 0
Nov 22 23:48:33.832: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999499s
Nov 22 23:48:34.837: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.966594807s
Nov 22 23:48:35.842: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.961235895s
Nov 22 23:48:36.847: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.956958858s
Nov 22 23:48:37.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.951574004s
Nov 22 23:48:38.857: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.946801195s
Nov 22 23:48:39.861: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.941837669s
Nov 22 23:48:40.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.937496866s
Nov 22 23:48:41.874: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.929428801s
Nov 22 23:48:42.879: INFO: Verifying statefulset ss doesn't scale past 1 for another 924.49135ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1427
Nov 22 23:48:43.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1427 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Nov 22 23:48:44.126: INFO: stderr: "I1122 23:48:44.015670    3809 log.go:172] (0xc000140f20) (0xc0006eaaa0) Create stream\nI1122 23:48:44.015730    3809 log.go:172] (0xc000140f20) (0xc0006eaaa0) Stream added, broadcasting: 1\nI1122 23:48:44.018760    3809 log.go:172] (0xc000140f20) Reply frame received for 1\nI1122 23:48:44.018814    3809 log.go:172] (0xc000140f20) (0xc0007ac000) Create stream\nI1122 23:48:44.018835    3809 log.go:172] (0xc000140f20) (0xc0007ac000) Stream added, broadcasting: 3\nI1122 23:48:44.019842    3809 log.go:172] (0xc000140f20) Reply frame received for 3\nI1122 23:48:44.019873    3809 log.go:172] (0xc000140f20) (0xc0007ac0a0) Create stream\nI1122 23:48:44.019884    3809 log.go:172] (0xc000140f20) (0xc0007ac0a0) Stream added, broadcasting: 5\nI1122 23:48:44.020959    3809 log.go:172] (0xc000140f20) Reply frame received for 5\nI1122 23:48:44.113239    3809 log.go:172] (0xc000140f20) Data frame received for 3\nI1122 23:48:44.113274    3809 log.go:172] (0xc0007ac000) (3) Data frame handling\nI1122 23:48:44.113283    3809 log.go:172] (0xc0007ac000) (3) Data frame sent\nI1122 23:48:44.113292    3809 log.go:172] (0xc000140f20) Data frame received for 3\nI1122 23:48:44.113303    3809 log.go:172] (0xc0007ac000) (3) Data frame handling\nI1122 23:48:44.113339    3809 log.go:172] (0xc000140f20) Data frame received for 5\nI1122 23:48:44.113351    3809 log.go:172] (0xc0007ac0a0) (5) Data frame handling\nI1122 23:48:44.113367    3809 log.go:172] (0xc0007ac0a0) (5) Data frame sent\nI1122 23:48:44.113376    3809 log.go:172] (0xc000140f20) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1122 23:48:44.113384    3809 log.go:172] (0xc0007ac0a0) (5) Data frame handling\nI1122 23:48:44.114853    3809 log.go:172] (0xc000140f20) Data frame received for 1\nI1122 23:48:44.114875    3809 log.go:172] (0xc0006eaaa0) (1) Data frame handling\nI1122 23:48:44.114887    3809 log.go:172] (0xc0006eaaa0) (1) Data frame sent\nI1122 23:48:44.114899    
3809 log.go:172] (0xc000140f20) (0xc0006eaaa0) Stream removed, broadcasting: 1\nI1122 23:48:44.114915    3809 log.go:172] (0xc000140f20) Go away received\nI1122 23:48:44.115294    3809 log.go:172] (0xc000140f20) (0xc0006eaaa0) Stream removed, broadcasting: 1\nI1122 23:48:44.115318    3809 log.go:172] (0xc000140f20) (0xc0007ac000) Stream removed, broadcasting: 3\nI1122 23:48:44.115328    3809 log.go:172] (0xc000140f20) (0xc0007ac0a0) Stream removed, broadcasting: 5\n"
Nov 22 23:48:44.126: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Nov 22 23:48:44.126: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Nov 22 23:48:44.129: INFO: Found 1 stateful pods, waiting for 3
Nov 22 23:48:54.134: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:48:54.134: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:48:54.134: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Nov 22 23:48:54.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1427 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Nov 22 23:48:54.347: INFO: stderr: "I1122 23:48:54.273498    3830 log.go:172] (0xc0009de420) (0xc0008a0640) Create stream\nI1122 23:48:54.273555    3830 log.go:172] (0xc0009de420) (0xc0008a0640) Stream added, broadcasting: 1\nI1122 23:48:54.276293    3830 log.go:172] (0xc0009de420) Reply frame received for 1\nI1122 23:48:54.276335    3830 log.go:172] (0xc0009de420) (0xc000922000) Create stream\nI1122 23:48:54.276346    3830 log.go:172] (0xc0009de420) (0xc000922000) Stream added, broadcasting: 3\nI1122 23:48:54.277640    3830 log.go:172] (0xc0009de420) Reply frame received for 3\nI1122 23:48:54.277702    3830 log.go:172] (0xc0009de420) (0xc000598280) Create stream\nI1122 23:48:54.277727    3830 log.go:172] (0xc0009de420) (0xc000598280) Stream added, broadcasting: 5\nI1122 23:48:54.278594    3830 log.go:172] (0xc0009de420) Reply frame received for 5\nI1122 23:48:54.339983    3830 log.go:172] (0xc0009de420) Data frame received for 3\nI1122 23:48:54.340015    3830 log.go:172] (0xc000922000) (3) Data frame handling\nI1122 23:48:54.340023    3830 log.go:172] (0xc000922000) (3) Data frame sent\nI1122 23:48:54.340028    3830 log.go:172] (0xc0009de420) Data frame received for 3\nI1122 23:48:54.340034    3830 log.go:172] (0xc000922000) (3) Data frame handling\nI1122 23:48:54.340041    3830 log.go:172] (0xc0009de420) Data frame received for 5\nI1122 23:48:54.340045    3830 log.go:172] (0xc000598280) (5) Data frame handling\nI1122 23:48:54.340056    3830 log.go:172] (0xc000598280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:48:54.340065    3830 log.go:172] (0xc0009de420) Data frame received for 5\nI1122 23:48:54.340132    3830 log.go:172] (0xc000598280) (5) Data frame handling\nI1122 23:48:54.341579    3830 log.go:172] (0xc0009de420) Data frame received for 1\nI1122 23:48:54.341595    3830 log.go:172] (0xc0008a0640) (1) Data frame handling\nI1122 23:48:54.341605    3830 log.go:172] (0xc0008a0640) (1) Data frame sent\nI1122 23:48:54.341616    
3830 log.go:172] (0xc0009de420) (0xc0008a0640) Stream removed, broadcasting: 1\nI1122 23:48:54.341676    3830 log.go:172] (0xc0009de420) Go away received\nI1122 23:48:54.341863    3830 log.go:172] (0xc0009de420) (0xc0008a0640) Stream removed, broadcasting: 1\nI1122 23:48:54.341875    3830 log.go:172] (0xc0009de420) (0xc000922000) Stream removed, broadcasting: 3\nI1122 23:48:54.341880    3830 log.go:172] (0xc0009de420) (0xc000598280) Stream removed, broadcasting: 5\n"
Nov 22 23:48:54.347: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Nov 22 23:48:54.347: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Nov 22 23:48:54.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1427 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Nov 22 23:48:54.584: INFO: stderr: "I1122 23:48:54.480136    3851 log.go:172] (0xc0008d02c0) (0xc000694780) Create stream\nI1122 23:48:54.480210    3851 log.go:172] (0xc0008d02c0) (0xc000694780) Stream added, broadcasting: 1\nI1122 23:48:54.486588    3851 log.go:172] (0xc0008d02c0) Reply frame received for 1\nI1122 23:48:54.486650    3851 log.go:172] (0xc0008d02c0) (0xc000694820) Create stream\nI1122 23:48:54.486669    3851 log.go:172] (0xc0008d02c0) (0xc000694820) Stream added, broadcasting: 3\nI1122 23:48:54.488367    3851 log.go:172] (0xc0008d02c0) Reply frame received for 3\nI1122 23:48:54.488707    3851 log.go:172] (0xc0008d02c0) (0xc000396be0) Create stream\nI1122 23:48:54.489047    3851 log.go:172] (0xc0008d02c0) (0xc000396be0) Stream added, broadcasting: 5\nI1122 23:48:54.491718    3851 log.go:172] (0xc0008d02c0) Reply frame received for 5\nI1122 23:48:54.547117    3851 log.go:172] (0xc0008d02c0) Data frame received for 5\nI1122 23:48:54.547139    3851 log.go:172] (0xc000396be0) (5) Data frame handling\nI1122 23:48:54.547150    3851 log.go:172] (0xc000396be0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:48:54.572604    3851 log.go:172] (0xc0008d02c0) Data frame received for 3\nI1122 23:48:54.572639    3851 log.go:172] (0xc000694820) (3) Data frame handling\nI1122 23:48:54.572672    3851 log.go:172] (0xc000694820) (3) Data frame sent\nI1122 23:48:54.573254    3851 log.go:172] (0xc0008d02c0) Data frame received for 5\nI1122 23:48:54.573310    3851 log.go:172] (0xc000396be0) (5) Data frame handling\nI1122 23:48:54.573354    3851 log.go:172] (0xc0008d02c0) Data frame received for 3\nI1122 23:48:54.573375    3851 log.go:172] (0xc000694820) (3) Data frame handling\nI1122 23:48:54.575665    3851 log.go:172] (0xc0008d02c0) Data frame received for 1\nI1122 23:48:54.575701    3851 log.go:172] (0xc000694780) (1) Data frame handling\nI1122 23:48:54.575746    3851 log.go:172] (0xc000694780) (1) Data frame sent\nI1122 23:48:54.575771    
3851 log.go:172] (0xc0008d02c0) (0xc000694780) Stream removed, broadcasting: 1\nI1122 23:48:54.575799    3851 log.go:172] (0xc0008d02c0) Go away received\nI1122 23:48:54.576421    3851 log.go:172] (0xc0008d02c0) (0xc000694780) Stream removed, broadcasting: 1\nI1122 23:48:54.576446    3851 log.go:172] (0xc0008d02c0) (0xc000694820) Stream removed, broadcasting: 3\nI1122 23:48:54.576458    3851 log.go:172] (0xc0008d02c0) (0xc000396be0) Stream removed, broadcasting: 5\n"
Nov 22 23:48:54.584: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Nov 22 23:48:54.584: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Nov 22 23:48:54.584: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1427 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Nov 22 23:48:55.013: INFO: stderr: "I1122 23:48:54.704515    3870 log.go:172] (0xc00010cdc0) (0xc0001f8820) Create stream\nI1122 23:48:54.704576    3870 log.go:172] (0xc00010cdc0) (0xc0001f8820) Stream added, broadcasting: 1\nI1122 23:48:54.707628    3870 log.go:172] (0xc00010cdc0) Reply frame received for 1\nI1122 23:48:54.707660    3870 log.go:172] (0xc00010cdc0) (0xc0005e8320) Create stream\nI1122 23:48:54.707667    3870 log.go:172] (0xc00010cdc0) (0xc0005e8320) Stream added, broadcasting: 3\nI1122 23:48:54.708540    3870 log.go:172] (0xc00010cdc0) Reply frame received for 3\nI1122 23:48:54.708590    3870 log.go:172] (0xc00010cdc0) (0xc0001f88c0) Create stream\nI1122 23:48:54.708613    3870 log.go:172] (0xc00010cdc0) (0xc0001f88c0) Stream added, broadcasting: 5\nI1122 23:48:54.709747    3870 log.go:172] (0xc00010cdc0) Reply frame received for 5\nI1122 23:48:54.770013    3870 log.go:172] (0xc00010cdc0) Data frame received for 5\nI1122 23:48:54.770057    3870 log.go:172] (0xc0001f88c0) (5) Data frame handling\nI1122 23:48:54.770085    3870 log.go:172] (0xc0001f88c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:48:55.004115    3870 log.go:172] (0xc00010cdc0) Data frame received for 3\nI1122 23:48:55.004275    3870 log.go:172] (0xc0005e8320) (3) Data frame handling\nI1122 23:48:55.004335    3870 log.go:172] (0xc0005e8320) (3) Data frame sent\nI1122 23:48:55.004366    3870 log.go:172] (0xc00010cdc0) Data frame received for 3\nI1122 23:48:55.004397    3870 log.go:172] (0xc00010cdc0) Data frame received for 5\nI1122 23:48:55.004424    3870 log.go:172] (0xc0001f88c0) (5) Data frame handling\nI1122 23:48:55.004437    3870 log.go:172] (0xc0005e8320) (3) Data frame handling\nI1122 23:48:55.006631    3870 log.go:172] (0xc00010cdc0) Data frame received for 1\nI1122 23:48:55.006650    3870 log.go:172] (0xc0001f8820) (1) Data frame handling\nI1122 23:48:55.006661    3870 log.go:172] (0xc0001f8820) (1) Data frame sent\nI1122 23:48:55.006672    
3870 log.go:172] (0xc00010cdc0) (0xc0001f8820) Stream removed, broadcasting: 1\nI1122 23:48:55.006919    3870 log.go:172] (0xc00010cdc0) (0xc0001f8820) Stream removed, broadcasting: 1\nI1122 23:48:55.006934    3870 log.go:172] (0xc00010cdc0) (0xc0005e8320) Stream removed, broadcasting: 3\nI1122 23:48:55.006940    3870 log.go:172] (0xc00010cdc0) (0xc0001f88c0) Stream removed, broadcasting: 5\nI1122 23:48:55.006958    3870 log.go:172] (0xc00010cdc0) Go away received\n"
Nov 22 23:48:55.013: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Nov 22 23:48:55.013: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Nov 22 23:48:55.014: INFO: Waiting for statefulset status.replicas updated to 0
Nov 22 23:48:55.017: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3
Nov 22 23:49:05.025: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Nov 22 23:49:05.025: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Nov 22 23:49:05.025: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Nov 22 23:49:05.038: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999462s
Nov 22 23:49:06.043: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.992682034s
Nov 22 23:49:07.049: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.987387905s
Nov 22 23:49:08.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981774344s
Nov 22 23:49:09.059: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.976890517s
Nov 22 23:49:10.084: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971808258s
Nov 22 23:49:11.089: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.946651376s
Nov 22 23:49:12.094: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.941251348s
Nov 22 23:49:13.100: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.93659517s
Nov 22 23:49:14.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 930.876744ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-1427
Nov 22 23:49:15.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1427 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Nov 22 23:49:15.374: INFO: stderr: "I1122 23:49:15.279485    3893 log.go:172] (0xc000a4a580) (0xc0002e4820) Create stream\nI1122 23:49:15.279548    3893 log.go:172] (0xc000a4a580) (0xc0002e4820) Stream added, broadcasting: 1\nI1122 23:49:15.284459    3893 log.go:172] (0xc000a4a580) Reply frame received for 1\nI1122 23:49:15.284504    3893 log.go:172] (0xc000a4a580) (0xc0002e4000) Create stream\nI1122 23:49:15.284521    3893 log.go:172] (0xc000a4a580) (0xc0002e4000) Stream added, broadcasting: 3\nI1122 23:49:15.285625    3893 log.go:172] (0xc000a4a580) Reply frame received for 3\nI1122 23:49:15.285681    3893 log.go:172] (0xc000a4a580) (0xc0002e4140) Create stream\nI1122 23:49:15.285698    3893 log.go:172] (0xc000a4a580) (0xc0002e4140) Stream added, broadcasting: 5\nI1122 23:49:15.286824    3893 log.go:172] (0xc000a4a580) Reply frame received for 5\nI1122 23:49:15.363400    3893 log.go:172] (0xc000a4a580) Data frame received for 5\nI1122 23:49:15.363427    3893 log.go:172] (0xc0002e4140) (5) Data frame handling\nI1122 23:49:15.363435    3893 log.go:172] (0xc0002e4140) (5) Data frame sent\nI1122 23:49:15.363442    3893 log.go:172] (0xc000a4a580) Data frame received for 5\nI1122 23:49:15.363446    3893 log.go:172] (0xc0002e4140) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1122 23:49:15.363463    3893 log.go:172] (0xc000a4a580) Data frame received for 3\nI1122 23:49:15.363468    3893 log.go:172] (0xc0002e4000) (3) Data frame handling\nI1122 23:49:15.363475    3893 log.go:172] (0xc0002e4000) (3) Data frame sent\nI1122 23:49:15.363479    3893 log.go:172] (0xc000a4a580) Data frame received for 3\nI1122 23:49:15.363484    3893 log.go:172] (0xc0002e4000) (3) Data frame handling\nI1122 23:49:15.364693    3893 log.go:172] (0xc000a4a580) Data frame received for 1\nI1122 23:49:15.364717    3893 log.go:172] (0xc0002e4820) (1) Data frame handling\nI1122 23:49:15.364726    3893 log.go:172] (0xc0002e4820) (1) Data frame sent\nI1122 23:49:15.364739    
3893 log.go:172] (0xc000a4a580) (0xc0002e4820) Stream removed, broadcasting: 1\nI1122 23:49:15.364772    3893 log.go:172] (0xc000a4a580) Go away received\nI1122 23:49:15.365097    3893 log.go:172] (0xc000a4a580) (0xc0002e4820) Stream removed, broadcasting: 1\nI1122 23:49:15.365120    3893 log.go:172] (0xc000a4a580) (0xc0002e4000) Stream removed, broadcasting: 3\nI1122 23:49:15.365129    3893 log.go:172] (0xc000a4a580) (0xc0002e4140) Stream removed, broadcasting: 5\n"
Nov 22 23:49:15.375: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Nov 22 23:49:15.375: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Nov 22 23:49:15.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1427 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Nov 22 23:49:15.556: INFO: stderr: "I1122 23:49:15.489395    3913 log.go:172] (0xc00080c210) (0xc000954640) Create stream\nI1122 23:49:15.489456    3913 log.go:172] (0xc00080c210) (0xc000954640) Stream added, broadcasting: 1\nI1122 23:49:15.491778    3913 log.go:172] (0xc00080c210) Reply frame received for 1\nI1122 23:49:15.491815    3913 log.go:172] (0xc00080c210) (0xc000500280) Create stream\nI1122 23:49:15.491826    3913 log.go:172] (0xc00080c210) (0xc000500280) Stream added, broadcasting: 3\nI1122 23:49:15.492652    3913 log.go:172] (0xc00080c210) Reply frame received for 3\nI1122 23:49:15.492674    3913 log.go:172] (0xc00080c210) (0xc000500320) Create stream\nI1122 23:49:15.492681    3913 log.go:172] (0xc00080c210) (0xc000500320) Stream added, broadcasting: 5\nI1122 23:49:15.493799    3913 log.go:172] (0xc00080c210) Reply frame received for 5\nI1122 23:49:15.548290    3913 log.go:172] (0xc00080c210) Data frame received for 5\nI1122 23:49:15.548334    3913 log.go:172] (0xc000500320) (5) Data frame handling\nI1122 23:49:15.548352    3913 log.go:172] (0xc000500320) (5) Data frame sent\nI1122 23:49:15.548366    3913 log.go:172] (0xc00080c210) Data frame received for 5\nI1122 23:49:15.548375    3913 log.go:172] (0xc000500320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1122 23:49:15.548401    3913 log.go:172] (0xc00080c210) Data frame received for 3\nI1122 23:49:15.548414    3913 log.go:172] (0xc000500280) (3) Data frame handling\nI1122 23:49:15.548442    3913 log.go:172] (0xc000500280) (3) Data frame sent\nI1122 23:49:15.548454    3913 log.go:172] (0xc00080c210) Data frame received for 3\nI1122 23:49:15.548465    3913 log.go:172] (0xc000500280) (3) Data frame handling\nI1122 23:49:15.549631    3913 log.go:172] (0xc00080c210) Data frame received for 1\nI1122 23:49:15.549663    3913 log.go:172] (0xc000954640) (1) Data frame handling\nI1122 23:49:15.549679    3913 log.go:172] (0xc000954640) (1) Data frame sent\nI1122 23:49:15.549693    
3913 log.go:172] (0xc00080c210) (0xc000954640) Stream removed, broadcasting: 1\nI1122 23:49:15.549712    3913 log.go:172] (0xc00080c210) Go away received\nI1122 23:49:15.550238    3913 log.go:172] (0xc00080c210) (0xc000954640) Stream removed, broadcasting: 1\nI1122 23:49:15.550266    3913 log.go:172] (0xc00080c210) (0xc000500280) Stream removed, broadcasting: 3\nI1122 23:49:15.550276    3913 log.go:172] (0xc00080c210) (0xc000500320) Stream removed, broadcasting: 5\n"
Nov 22 23:49:15.556: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Nov 22 23:49:15.556: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Nov 22 23:49:15.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1427 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Nov 22 23:49:15.740: INFO: stderr: "I1122 23:49:15.670721    3934 log.go:172] (0xc0001288f0) (0xc0005c8aa0) Create stream\nI1122 23:49:15.670791    3934 log.go:172] (0xc0001288f0) (0xc0005c8aa0) Stream added, broadcasting: 1\nI1122 23:49:15.674203    3934 log.go:172] (0xc0001288f0) Reply frame received for 1\nI1122 23:49:15.674248    3934 log.go:172] (0xc0001288f0) (0xc00096a000) Create stream\nI1122 23:49:15.674261    3934 log.go:172] (0xc0001288f0) (0xc00096a000) Stream added, broadcasting: 3\nI1122 23:49:15.675293    3934 log.go:172] (0xc0001288f0) Reply frame received for 3\nI1122 23:49:15.675348    3934 log.go:172] (0xc0001288f0) (0xc000956000) Create stream\nI1122 23:49:15.675374    3934 log.go:172] (0xc0001288f0) (0xc000956000) Stream added, broadcasting: 5\nI1122 23:49:15.676416    3934 log.go:172] (0xc0001288f0) Reply frame received for 5\nI1122 23:49:15.731135    3934 log.go:172] (0xc0001288f0) Data frame received for 3\nI1122 23:49:15.731175    3934 log.go:172] (0xc00096a000) (3) Data frame handling\nI1122 23:49:15.731201    3934 log.go:172] (0xc00096a000) (3) Data frame sent\nI1122 23:49:15.731303    3934 log.go:172] (0xc0001288f0) Data frame received for 5\nI1122 23:49:15.731346    3934 log.go:172] (0xc000956000) (5) Data frame handling\nI1122 23:49:15.731363    3934 log.go:172] (0xc000956000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1122 23:49:15.731377    3934 log.go:172] (0xc0001288f0) Data frame received for 3\nI1122 23:49:15.731388    3934 log.go:172] (0xc00096a000) (3) Data frame handling\nI1122 23:49:15.731421    3934 log.go:172] (0xc0001288f0) Data frame received for 5\nI1122 23:49:15.731447    3934 log.go:172] (0xc000956000) (5) Data frame handling\nI1122 23:49:15.733656    3934 log.go:172] (0xc0001288f0) Data frame received for 1\nI1122 23:49:15.733670    3934 log.go:172] (0xc0005c8aa0) (1) Data frame handling\nI1122 23:49:15.733677    3934 log.go:172] (0xc0005c8aa0) (1) Data frame sent\nI1122 23:49:15.733687    
3934 log.go:172] (0xc0001288f0) (0xc0005c8aa0) Stream removed, broadcasting: 1\nI1122 23:49:15.733725    3934 log.go:172] (0xc0001288f0) Go away received\nI1122 23:49:15.734035    3934 log.go:172] (0xc0001288f0) (0xc0005c8aa0) Stream removed, broadcasting: 1\nI1122 23:49:15.734050    3934 log.go:172] (0xc0001288f0) (0xc00096a000) Stream removed, broadcasting: 3\nI1122 23:49:15.734056    3934 log.go:172] (0xc0001288f0) (0xc000956000) Stream removed, broadcasting: 5\n"
Nov 22 23:49:15.740: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Nov 22 23:49:15.740: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Nov 22 23:49:15.740: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Nov 22 23:49:45.770: INFO: Deleting all statefulset in ns statefulset-1427
Nov 22 23:49:45.773: INFO: Scaling statefulset ss to 0
Nov 22 23:49:45.782: INFO: Waiting for statefulset status.replicas updated to 0
Nov 22 23:49:45.784: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:49:45.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1427" for this suite.
Nov 22 23:49:51.809: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:49:51.906: INFO: namespace statefulset-1427 deletion completed in 6.106913164s

• [SLOW TEST:103.842 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
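The StatefulSet test above verified that "ss" was "scaled down in reverse order": pods are removed one at a time, highest ordinal first. A minimal Python sketch of that ordering rule (the pod names and the `parse_ordinal` helper are illustrative, not part of the e2e framework):

```python
def parse_ordinal(pod_name: str) -> int:
    """Extract the ordinal suffix from a StatefulSet pod name like 'ss-2'."""
    return int(pod_name.rsplit("-", 1)[1])

def scale_down_order(pods):
    """StatefulSets delete pods sequentially, from the highest ordinal down to 0."""
    return sorted(pods, key=parse_ordinal, reverse=True)

# Scaling ss from 3 replicas to 0 deletes ss-2, then ss-1, then ss-0.
print(scale_down_order(["ss-0", "ss-1", "ss-2"]))  # ['ss-2', 'ss-1', 'ss-0']
```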
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:49:51.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-1075/configmap-test-69065024-201b-44e0-abfe-3c67be4a945b
STEP: Creating a pod to test consume configMaps
Nov 22 23:49:52.058: INFO: Waiting up to 5m0s for pod "pod-configmaps-4fa7cbcb-a498-4272-8d9b-a77a184d59d9" in namespace "configmap-1075" to be "success or failure"
Nov 22 23:49:52.062: INFO: Pod "pod-configmaps-4fa7cbcb-a498-4272-8d9b-a77a184d59d9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.841152ms
Nov 22 23:49:54.066: INFO: Pod "pod-configmaps-4fa7cbcb-a498-4272-8d9b-a77a184d59d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007691728s
Nov 22 23:49:56.070: INFO: Pod "pod-configmaps-4fa7cbcb-a498-4272-8d9b-a77a184d59d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011334708s
STEP: Saw pod success
Nov 22 23:49:56.070: INFO: Pod "pod-configmaps-4fa7cbcb-a498-4272-8d9b-a77a184d59d9" satisfied condition "success or failure"
Nov 22 23:49:56.072: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-4fa7cbcb-a498-4272-8d9b-a77a184d59d9 container env-test: 
STEP: delete the pod
Nov 22 23:49:56.115: INFO: Waiting for pod pod-configmaps-4fa7cbcb-a498-4272-8d9b-a77a184d59d9 to disappear
Nov 22 23:49:56.122: INFO: Pod pod-configmaps-4fa7cbcb-a498-4272-8d9b-a77a184d59d9 no longer exists
[AfterEach] [sig-node] ConfigMap
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:49:56.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1075" for this suite.
Nov 22 23:50:02.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:50:02.227: INFO: namespace configmap-1075 deletion completed in 6.101836145s

• [SLOW TEST:10.321 seconds]
[sig-node] ConfigMap
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
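The ConfigMap test above creates a ConfigMap and a pod whose `env-test` container reads one of its keys through an environment variable. A hedged sketch of the kind of manifests involved, expressed as plain dicts (the names, image, and the single `data` key are illustrative, not taken from the test source):

```python
# A ConfigMap with one key to be surfaced as an environment variable.
config_map = {
    "apiVersion": "v1",
    "kind": "ConfigMap",
    "metadata": {"name": "configmap-test"},
    "data": {"data-1": "value-1"},
}

# The pod maps the ConfigMap key into the container environment via valueFrom,
# so the container sees CONFIG_DATA_1=value-1 when it runs `env`.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-configmaps-test"},
    "spec": {
        "restartPolicy": "Never",
        "containers": [{
            "name": "env-test",
            "image": "busybox",
            "command": ["sh", "-c", "env"],
            "env": [{
                "name": "CONFIG_DATA_1",
                "valueFrom": {
                    "configMapKeyRef": {"name": "configmap-test", "key": "data-1"}
                },
            }],
        }],
    },
}
```

The pod runs to completion ("success or failure" in the log means phase Succeeded), and the test then inspects the container's logs for the expected variable.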
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:50:02.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Nov 22 23:50:02.324: INFO: Creating deployment "nginx-deployment"
Nov 22 23:50:02.339: INFO: Waiting for observed generation 1
Nov 22 23:50:04.354: INFO: Waiting for all required pods to come up
Nov 22 23:50:04.358: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Nov 22 23:50:14.368: INFO: Waiting for deployment "nginx-deployment" to complete
Nov 22 23:50:14.374: INFO: Updating deployment "nginx-deployment" with a non-existent image
Nov 22 23:50:14.394: INFO: Updating deployment nginx-deployment
Nov 22 23:50:14.394: INFO: Waiting for observed generation 2
Nov 22 23:50:16.417: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Nov 22 23:50:16.420: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Nov 22 23:50:16.422: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Nov 22 23:50:16.430: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Nov 22 23:50:16.431: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Nov 22 23:50:16.433: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Nov 22 23:50:16.438: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Nov 22 23:50:16.438: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Nov 22 23:50:16.443: INFO: Updating deployment nginx-deployment
Nov 22 23:50:16.443: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Nov 22 23:50:16.544: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Nov 22 23:50:16.568: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
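The `.spec.replicas` values verified above (20 for the first rollout's ReplicaSet, 13 for the second) follow from the deployment controller's proportional-scaling rule: on a scale from 10 to 30 with maxSurge 3, each ReplicaSet's new size is roughly its old size scaled by the ratio of the new allowed total (desired + maxSurge = 33) to the total recorded in its max-replicas annotation before the scale (10 + 3 = 13), rounded to the nearest integer. A sketch under those assumptions (the helper name is ours, not from the controller source):

```python
def proportional_size(rs_replicas: int, new_allowed: int, annotated_max: int) -> int:
    """Approximate new ReplicaSet size under proportional scaling.

    new_allowed   -- desired replicas + maxSurge after the scale (30 + 3 = 33)
    annotated_max -- the max-replicas annotation before the scale (10 + 3 = 13)
    """
    return round(rs_replicas * new_allowed / annotated_max)

# First rollout's ReplicaSet held 8 replicas, the second held 5 (8 + 5 = 13).
print(proportional_size(8, 33, 13))  # 20
print(proportional_size(5, 33, 13))  # 13
```

The two results sum to 33, i.e. the deployment never exceeds desired + maxSurge while both ReplicaSets grow in proportion to their current share.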
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Nov 22 23:50:16.679: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-6691,SelfLink:/apis/apps/v1/namespaces/deployment-6691/deployments/nginx-deployment,UID:b3c319a8-a0d9-43e3-9923-2d9053ea3bc2,ResourceVersion:10994336,Generation:3,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-11-22 23:50:15 +0000 UTC 2020-11-22 23:50:02 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-11-22 23:50:16 +0000 UTC 2020-11-22 23:50:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Nov 22 23:50:16.933: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-6691,SelfLink:/apis/apps/v1/namespaces/deployment-6691/replicasets/nginx-deployment-55fb7cb77f,UID:57dbd4f1-9e28-401f-9417-d7e97994d7cc,ResourceVersion:10994323,Generation:3,CreationTimestamp:2020-11-22 23:50:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b3c319a8-a0d9-43e3-9923-2d9053ea3bc2 0xc000557527 0xc000557528}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Nov 22 23:50:16.933: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Nov 22 23:50:16.934: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-6691,SelfLink:/apis/apps/v1/namespaces/deployment-6691/replicasets/nginx-deployment-7b8c6f4498,UID:78fa7ccb-3b13-4f36-96ec-49296418eb9a,ResourceVersion:10994368,Generation:3,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment b3c319a8-a0d9-43e3-9923-2d9053ea3bc2 0xc000557687 0xc000557688}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Nov 22 23:50:17.007: INFO: Pod "nginx-deployment-55fb7cb77f-4zv8j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-4zv8j,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-4zv8j,UID:b70bd377-1ba1-49db-97f6-9bab4b7c8f02,ResourceVersion:10994367,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc000571727 0xc000571728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000571c30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000571d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.007: INFO: Pod "nginx-deployment-55fb7cb77f-56w2w" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-56w2w,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-56w2w,UID:37a7467b-203b-4b71-9a5c-dab648a1e67c,ResourceVersion:10994369,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc000328207 0xc000328208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc0003295b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000329dc0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.008: INFO: Pod "nginx-deployment-55fb7cb77f-79m5d" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-79m5d,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-79m5d,UID:dc426340-ba55-4212-b7fd-562c5290eec6,ResourceVersion:10994332,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc000380af7 0xc000380af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc000381b10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000381b30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.008: INFO: Pod "nginx-deployment-55fb7cb77f-7kwn4" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7kwn4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-7kwn4,UID:021b884f-0b8a-4c5b-9f1d-8b4715ac97d1,ResourceVersion:10994298,Generation:0,CreationTimestamp:2020-11-22 23:50:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc000381cd7 0xc000381cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001ade0f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001ade110}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-22 23:50:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.008: INFO: Pod "nginx-deployment-55fb7cb77f-9gd7b" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9gd7b,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-9gd7b,UID:1798178f-3c9d-4446-91b3-d9502a48e6ad,ResourceVersion:10994365,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc001ade987 0xc001ade988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001adea00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001adea20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.008: INFO: Pod "nginx-deployment-55fb7cb77f-d2qq9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-d2qq9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-d2qq9,UID:d55f5f52-c089-4f12-89db-ace87b49cc78,ResourceVersion:10994311,Generation:0,CreationTimestamp:2020-11-22 23:50:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc001adeaa7 0xc001adeaa8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001adeb80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001adeba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-22 23:50:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.008: INFO: Pod "nginx-deployment-55fb7cb77f-hjm99" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hjm99,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-hjm99,UID:876d43d9-54a2-4d3d-8bb7-fce1a73a0e33,ResourceVersion:10994363,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc001adee37 0xc001adee38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001adf050} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001adf070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.008: INFO: Pod "nginx-deployment-55fb7cb77f-j6mzr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-j6mzr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-j6mzr,UID:f0381a92-1178-4588-972b-68c3e0a81448,ResourceVersion:10994375,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc001adf247 0xc001adf248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001adf440} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001adf460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.008: INFO: Pod "nginx-deployment-55fb7cb77f-kc9w8" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kc9w8,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-kc9w8,UID:9d5443c0-18fc-439c-8bf9-47088516011b,ResourceVersion:10994284,Generation:0,CreationTimestamp:2020-11-22 23:50:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc001adf4e7 0xc001adf4e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001adf580} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001adf5a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-22 23:50:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.009: INFO: Pod "nginx-deployment-55fb7cb77f-lml7c" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lml7c,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-lml7c,UID:d923a36f-be43-4561-ba8b-dd57c79646b6,ResourceVersion:10994349,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc001adf677 0xc001adf678}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001adf720} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001adf750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.009: INFO: Pod "nginx-deployment-55fb7cb77f-rvbdq" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rvbdq,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-rvbdq,UID:618c50d2-9fa5-41f1-aeb6-098311334c74,ResourceVersion:10994348,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc001adf7d7 0xc001adf7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001adf850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001adf870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.009: INFO: Pod "nginx-deployment-55fb7cb77f-s85b9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-s85b9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-s85b9,UID:d0cffd3b-9243-447d-a0b1-c662a2596f12,ResourceVersion:10994314,Generation:0,CreationTimestamp:2020-11-22 23:50:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc001adfa87 0xc001adfa88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc001adfc60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001adfd20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:15 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-22 23:50:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.009: INFO: Pod "nginx-deployment-55fb7cb77f-vpqjz" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vpqjz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-55fb7cb77f-vpqjz,UID:c4151fb2-5273-404e-9704-a46dd6aa1473,ResourceVersion:10994293,Generation:0,CreationTimestamp:2020-11-22 23:50:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 57dbd4f1-9e28-401f-9417-d7e97994d7cc 0xc002322087 0xc002322088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists  NoExecute 0xc002322100} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002322120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:14 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-22 23:50:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
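The "is available" / "is not available" verdicts above track each pod's `Ready` condition: the Pending pods carry only `PodScheduled True` (or `Ready False` with reason `ContainersNotReady`), while available pods have `Ready True`. A minimal sketch of that predicate, using simplified condition dicts rather than the e2e framework's actual Go code:

```python
# Illustrative simplification of the availability check implied by the
# "is (not) available" log lines: a pod counts as available once its
# Ready condition reports status True. Not the real e2e framework code.

def pod_is_available(conditions):
    """conditions: list of {'type': str, 'status': str} dicts."""
    return any(c["type"] == "Ready" and c["status"] == "True" for c in conditions)

# Freshly scheduled pod from the log: only PodScheduled is True.
pending = [{"type": "PodScheduled", "status": "True"}]

# Running 7b8c6f4498 pod from the log: all four conditions True.
running = [
    {"type": "Initialized", "status": "True"},
    {"type": "Ready", "status": "True"},
    {"type": "ContainersReady", "status": "True"},
    {"type": "PodScheduled", "status": "True"},
]

print(pod_is_available(pending))  # False
print(pod_is_available(running))  # True
```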
Nov 22 23:50:17.009: INFO: Pod "nginx-deployment-7b8c6f4498-62dkg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-62dkg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-62dkg,UID:0548687c-0ce5-4c48-abe1-92a12fe878b6,ResourceVersion:10994226,Generation:0,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002322287 0xc002322288}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023223a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023223d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.46,StartTime:2020-11-22 23:50:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:50:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://922af9fd3a972456afaa3b9dfab1e3e5566cfd93c72fa709354a4b6ee4a9b533}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.009: INFO: Pod "nginx-deployment-7b8c6f4498-6s6kx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6s6kx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-6s6kx,UID:d053feb1-5c22-4cf7-ac6b-4d11b3ceed10,ResourceVersion:10994357,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc0023226f7 0xc0023226f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002322800} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002322830}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.009: INFO: Pod "nginx-deployment-7b8c6f4498-c5gfh" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-c5gfh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-c5gfh,UID:39e0f507-177e-4439-a036-bb213c776c12,ResourceVersion:10994254,Generation:0,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc0023228b7 0xc0023228b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002322930} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002322960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.47,StartTime:2020-11-22 23:50:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:50:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://190a90b25072cce517115cc29975f1afb22dacb9ef0cbc42ca01cc9049d18c6a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.009: INFO: Pod "nginx-deployment-7b8c6f4498-dbn52" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dbn52,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-dbn52,UID:29bee017-cb06-48d0-81a0-50000af853d0,ResourceVersion:10994340,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002322af7 0xc002322af8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002322b80} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002322ba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-dptbf" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-dptbf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-dptbf,UID:eec76aab-0f66-44ca-8ff4-6440bbe5fee4,ResourceVersion:10994360,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002322c47 0xc002322c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002322cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002322cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-frm7m" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-frm7m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-frm7m,UID:c3d01de3-b0f2-4be9-9457-3b870409f198,ResourceVersion:10994197,Generation:0,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002322d97 0xc002322d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002322e10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002322e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:07 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:07 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.44,StartTime:2020-11-22 23:50:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:50:06 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://0ab2fe3c729fd9d186b374150c504bca6c2c0c20cf106281f31c1dfa658699e8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-fwf5h" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fwf5h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-fwf5h,UID:4062460d-138d-4097-95c2-bd38f470d0c5,ResourceVersion:10994248,Generation:0,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323037 0xc002323038}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023230b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023230d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.89,StartTime:2020-11-22 23:50:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:50:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7bf64880040c1f19029a7b623b5d6a35cc6871558370b58ec833293a1cb91aee}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-jlj2g" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jlj2g,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-jlj2g,UID:faa789f2-b0e0-43b8-886a-8784dfcfdd58,ResourceVersion:10994205,Generation:0,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc0023231d7 0xc0023231d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002323250} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002323270}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:08 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:08 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.85,StartTime:2020-11-22 23:50:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:50:07 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://56edb48f121035b194c29f17f1921e0049e955583caaddbabe699c107890ee6b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-jwk6k" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jwk6k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-jwk6k,UID:c50be670-995f-4917-bdd2-6f65f6a60858,ResourceVersion:10994361,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323347 0xc002323348}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023233c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023233e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-n4ltp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-n4ltp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-n4ltp,UID:a6fc99fb-05e4-4020-9207-0571efe837cc,ResourceVersion:10994366,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323467 0xc002323468}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0023234f0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002323510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-q7xjt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-q7xjt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-q7xjt,UID:7566e493-3403-4375-80ed-5ef647e2a57c,ResourceVersion:10994376,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323597 0xc002323598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002323610} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002323630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:,StartTime:2020-11-22 23:50:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-r4q69" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r4q69,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-r4q69,UID:2a9e69e5-d1e8-4913-812f-8c67d67f09e2,ResourceVersion:10994364,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc0023236f7 0xc0023236f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002323770} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002323790}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-rdwfz" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rdwfz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-rdwfz,UID:9ce18f8b-d8c6-45d2-b0f3-78e2d0b71a85,ResourceVersion:10994213,Generation:0,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323817 0xc002323818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002323890} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0023238b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:09 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:09 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.6,PodIP:10.244.1.45,StartTime:2020-11-22 23:50:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:50:09 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://19c351a63a8693e3823a2b6c22a1640cd34a5d802346837c35e6d5df8b032e7c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.010: INFO: Pod "nginx-deployment-7b8c6f4498-rg4dt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rg4dt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-rg4dt,UID:45c6fe06-fe5f-474f-a9eb-7aefdb822cd8,ResourceVersion:10994351,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323987 0xc002323988}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002323a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002323a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.011: INFO: Pod "nginx-deployment-7b8c6f4498-sdn58" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sdn58,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-sdn58,UID:1c2518e2-1882-40d6-8018-67c45ba11c89,ResourceVersion:10994227,Generation:0,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323ac7 0xc002323ac8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002323b40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002323b60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:11 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:11 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.86,StartTime:2020-11-22 23:50:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:50:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9715f5ad2d7ef96fb581aebabd8862ad19b03708ce8cb85c07cbd482a9707373}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.011: INFO: Pod "nginx-deployment-7b8c6f4498-sf7jr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sf7jr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-sf7jr,UID:5dbfb106-ab7c-4f11-aff5-998470b86d8f,ResourceVersion:10994242,Generation:0,CreationTimestamp:2020-11-22 23:50:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323c47 0xc002323c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002323cd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002323cf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:12 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:12 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:02 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:10.244.2.87,StartTime:2020-11-22 23:50:02 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-11-22 23:50:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://61dad3617e3efd10769f99feb27ae742b885083c537b225e7c97949bbfe69760}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.011: INFO: Pod "nginx-deployment-7b8c6f4498-tnxw9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-tnxw9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-tnxw9,UID:91a56c88-80c2-4606-81f0-5bec841ad651,ResourceVersion:10994352,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc002323df7 0xc002323df8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001934020} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001934040}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.011: INFO: Pod "nginx-deployment-7b8c6f4498-vrw95" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vrw95,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-vrw95,UID:9694b8ce-804b-4d08-b9d7-47adb5aec590,ResourceVersion:10994344,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc0019340c7 0xc0019340c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001934160} {node.kubernetes.io/unreachable Exists  NoExecute 0xc001934180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.011: INFO: Pod "nginx-deployment-7b8c6f4498-x6546" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x6546,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-x6546,UID:a3c457d4-6902-4312-9867-ac6c546ea4d2,ResourceVersion:10994362,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc001934217 0xc001934218}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc001934290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019342b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Nov 22 23:50:17.011: INFO: Pod "nginx-deployment-7b8c6f4498-xnf5j" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xnf5j,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-6691,SelfLink:/api/v1/namespaces/deployment-6691/pods/nginx-deployment-7b8c6f4498-xnf5j,UID:1a1b5477-1d40-4573-8688-6bbf5c0e1adb,ResourceVersion:10994380,Generation:0,CreationTimestamp:2020-11-22 23:50:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 78fa7ccb-3b13-4f36-96ec-49296418eb9a 0xc001934337 0xc001934338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-zr8zp {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-zr8zp,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-zr8zp true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0019343b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0019343d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-11-22 23:50:16 +0000 UTC  }],Message:,Reason:,HostIP:172.18.0.5,PodIP:,StartTime:2020-11-22 23:50:16 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:50:17.011: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6691" for this suite.
Nov 22 23:50:37.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:50:37.549: INFO: namespace deployment-6691 deletion completed in 20.438885598s

• [SLOW TEST:35.321 seconds]
[sig-apps] Deployment
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
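(Editor's note, not part of the log: the "proportional scaling" test above verifies that when a Deployment is resized mid-rollout, the scale change is split across its ReplicaSets in proportion to their current sizes. A minimal sketch of that distribution rule follows — a simplified illustration of the documented behavior, not the controller's actual code, which also accounts for maxSurge and ReplicaSet annotations.)

```python
def proportional_scale(rs_sizes, old_total, new_total):
    """Distribute a Deployment scale change across its ReplicaSets in
    proportion to their current sizes, handing any rounding leftover to
    the largest ReplicaSets first. Simplified sketch of the behavior the
    e2e test above exercises; not the real controller implementation."""
    if old_total == 0:
        return dict(rs_sizes)
    # Each ReplicaSet gets the floor of its proportional share.
    scaled = {name: size * new_total // old_total
              for name, size in rs_sizes.items()}
    leftover = new_total - sum(scaled.values())
    # Hand out the remainder one replica at a time, largest set first.
    for name in sorted(rs_sizes, key=rs_sizes.get, reverse=True):
        if leftover == 0:
            break
        scaled[name] += 1
        leftover -= 1
    return scaled
```

For example, scaling a Deployment from 13 to 20 replicas while an old ReplicaSet holds 8 pods and a new one holds 5 keeps the same ratio rather than dumping all new replicas onto the new ReplicaSet.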
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:50:37.549: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:50:37.785: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a" in namespace "downward-api-1635" to be "success or failure"
Nov 22 23:50:37.943: INFO: Pod "downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a": Phase="Pending", Reason="", readiness=false. Elapsed: 157.89495ms
Nov 22 23:50:39.947: INFO: Pod "downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161056441s
Nov 22 23:50:41.950: INFO: Pod "downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.164901423s
Nov 22 23:50:43.953: INFO: Pod "downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a": Phase="Running", Reason="", readiness=true. Elapsed: 6.167819749s
Nov 22 23:50:45.958: INFO: Pod "downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a": Phase="Running", Reason="", readiness=true. Elapsed: 8.172106219s
Nov 22 23:50:47.962: INFO: Pod "downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.176009127s
STEP: Saw pod success
Nov 22 23:50:47.962: INFO: Pod "downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a" satisfied condition "success or failure"
Nov 22 23:50:47.964: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a container client-container: 
STEP: delete the pod
Nov 22 23:50:48.123: INFO: Waiting for pod downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a to disappear
Nov 22 23:50:48.137: INFO: Pod downwardapi-volume-8cc8f109-761b-4bcc-907e-c6c48d34785a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:50:48.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1635" for this suite.
Nov 22 23:50:54.176: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:50:54.249: INFO: namespace downward-api-1635 deletion completed in 6.106958148s

• [SLOW TEST:16.700 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
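(Editor's note, not part of the log: the Downward API test above creates a pod whose container reads its own CPU limit from a file. A minimal sketch of that kind of pod manifest, written as a plain dict mirroring the YAML; the names, image, and paths are illustrative assumptions, not taken from the test source.)

```python
# Sketch of a pod exposing its container's cpu limit through a downwardAPI
# volume via resourceFieldRef. All names here are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "downwardapi-volume-example"},
    "spec": {
        "containers": [{
            "name": "client-container",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/podinfo/cpu_limit"],
            "resources": {"limits": {"cpu": "500m"}},
            "volumeMounts": [{"name": "podinfo",
                              "mountPath": "/etc/podinfo"}],
        }],
        "volumes": [{
            "name": "podinfo",
            "downwardAPI": {
                "items": [{
                    "path": "cpu_limit",
                    "resourceFieldRef": {
                        "containerName": "client-container",
                        "resource": "limits.cpu",
                        # With divisor "1m", a 500m limit is written as "500".
                        "divisor": "1m",
                    },
                }],
            },
        }],
        "restartPolicy": "Never",
    },
}
```

The e2e test then asserts on the container's logs, which is why the log above shows it fetching logs from the `client-container` after the pod reaches `Succeeded`.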
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:50:54.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:50:54.313: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dfcc32cd-6d9e-4e47-b8bf-f9cc7579ce49" in namespace "downward-api-3145" to be "success or failure"
Nov 22 23:50:54.338: INFO: Pod "downwardapi-volume-dfcc32cd-6d9e-4e47-b8bf-f9cc7579ce49": Phase="Pending", Reason="", readiness=false. Elapsed: 24.840096ms
Nov 22 23:50:56.343: INFO: Pod "downwardapi-volume-dfcc32cd-6d9e-4e47-b8bf-f9cc7579ce49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030107542s
Nov 22 23:50:58.347: INFO: Pod "downwardapi-volume-dfcc32cd-6d9e-4e47-b8bf-f9cc7579ce49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034510437s
STEP: Saw pod success
Nov 22 23:50:58.347: INFO: Pod "downwardapi-volume-dfcc32cd-6d9e-4e47-b8bf-f9cc7579ce49" satisfied condition "success or failure"
Nov 22 23:50:58.351: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-dfcc32cd-6d9e-4e47-b8bf-f9cc7579ce49 container client-container: 
STEP: delete the pod
Nov 22 23:50:58.383: INFO: Waiting for pod downwardapi-volume-dfcc32cd-6d9e-4e47-b8bf-f9cc7579ce49 to disappear
Nov 22 23:50:58.394: INFO: Pod downwardapi-volume-dfcc32cd-6d9e-4e47-b8bf-f9cc7579ce49 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:50:58.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3145" for this suite.
Nov 22 23:51:04.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:51:04.499: INFO: namespace downward-api-3145 deletion completed in 6.100251771s

• [SLOW TEST:10.250 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:51:04.499: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-a506d2e6-018c-4a83-951f-d831d6d04a91
STEP: Creating a pod to test consume secrets
Nov 22 23:51:04.678: INFO: Waiting up to 5m0s for pod "pod-secrets-a4915b74-1835-4eec-9beb-292a72daf3fb" in namespace "secrets-8139" to be "success or failure"
Nov 22 23:51:04.682: INFO: Pod "pod-secrets-a4915b74-1835-4eec-9beb-292a72daf3fb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144853ms
Nov 22 23:51:06.775: INFO: Pod "pod-secrets-a4915b74-1835-4eec-9beb-292a72daf3fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097189253s
Nov 22 23:51:08.779: INFO: Pod "pod-secrets-a4915b74-1835-4eec-9beb-292a72daf3fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.10131045s
STEP: Saw pod success
Nov 22 23:51:08.779: INFO: Pod "pod-secrets-a4915b74-1835-4eec-9beb-292a72daf3fb" satisfied condition "success or failure"
Nov 22 23:51:08.782: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a4915b74-1835-4eec-9beb-292a72daf3fb container secret-volume-test: 
STEP: delete the pod
Nov 22 23:51:08.820: INFO: Waiting for pod pod-secrets-a4915b74-1835-4eec-9beb-292a72daf3fb to disappear
Nov 22 23:51:08.838: INFO: Pod pod-secrets-a4915b74-1835-4eec-9beb-292a72daf3fb no longer exists
[AfterEach] [sig-storage] Secrets
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:51:08.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8139" for this suite.
Nov 22 23:51:14.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:51:14.931: INFO: namespace secrets-8139 deletion completed in 6.090060577s
STEP: Destroying namespace "secret-namespace-9925" for this suite.
Nov 22 23:51:20.944: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:51:21.031: INFO: namespace secret-namespace-9925 deletion completed in 6.099727387s

• [SLOW TEST:16.532 seconds]
[sig-storage] Secrets
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
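(Editor's note, not part of the log: the Secrets test above creates a same-named secret in a second namespace — `secret-namespace-9925` in this run — and verifies the pod still mounts the one from its own namespace. The reason is structural: a secret volume source names only the secret, never a namespace, so the kubelet always resolves it in the pod's namespace. A minimal sketch, with illustrative names, follows.)

```python
# Sketch: the secret volume source carries only "secretName". There is no
# namespace field, so a same-named secret elsewhere can never be mounted.
# Names below are illustrative.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-secrets-example", "namespace": "secrets-8139"},
    "spec": {
        "containers": [{
            "name": "secret-volume-test",
            "image": "busybox",
            "command": ["sh", "-c", "cat /etc/secret-volume/data-1"],
            "volumeMounts": [{"name": "secret-volume",
                              "mountPath": "/etc/secret-volume"}],
        }],
        "volumes": [{
            "name": "secret-volume",
            # Resolved in the pod's own namespace ("secrets-8139"); a secret
            # of the same name in another namespace is invisible here.
            "secret": {"secretName": "secret-test-example"},
        }],
        "restartPolicy": "Never",
    },
}
```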
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:51:21.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1122 23:51:32.607874       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Nov 22 23:51:32.607: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:51:32.607: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7764" for this suite.
Nov 22 23:51:40.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:51:40.893: INFO: namespace gc-7764 deletion completed in 8.282864354s

• [SLOW TEST:19.862 seconds]
[sig-api-machinery] Garbage collector
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
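The garbage-collector test above hinges on `metadata.ownerReferences`: half of the pods created by `simpletest-rc-to-be-deleted` are also given `simpletest-rc-to-stay` as an owner, so foreground deletion of the first RC must not cascade to them. A minimal sketch of the dual-owner manifest that scenario relies on (names mirror the log; the helper and UIDs are illustrative, and this only builds the object, it does not talk to a cluster):

```python
# Sketch of the dual-ownership setup: a pod carries two ownerReferences,
# so the GC only deletes it once *all* blocking owners are gone. Field
# names are the real Kubernetes API ones; names/UIDs are made up.

def owner_ref(name, uid, controller=False):
    """Build one entry for metadata.ownerReferences."""
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "name": name,
        "uid": uid,
        "controller": controller,
        "blockOwnerDeletion": True,
    }

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {
        "name": "simpletest-pod",
        "ownerReferences": [
            owner_ref("simpletest-rc-to-be-deleted", "uid-1", controller=True),
            owner_ref("simpletest-rc-to-stay", "uid-2"),
        ],
    },
}

# With simpletest-rc-to-stay still alive, the pod survives foreground
# deletion of simpletest-rc-to-be-deleted -- which is what the test asserts.
assert len(pod["metadata"]["ownerReferences"]) == 2
```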
SSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:51:40.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-bef4d6eb-c7b3-438e-ab0c-a48efd126835
STEP: Creating a pod to test consume secrets
Nov 22 23:51:40.978: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-de3f2996-cf84-431f-9d45-cc4db440a27b" in namespace "projected-423" to be "success or failure"
Nov 22 23:51:40.988: INFO: Pod "pod-projected-secrets-de3f2996-cf84-431f-9d45-cc4db440a27b": Phase="Pending", Reason="", readiness=false. Elapsed: 9.706427ms
Nov 22 23:51:42.998: INFO: Pod "pod-projected-secrets-de3f2996-cf84-431f-9d45-cc4db440a27b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019934359s
Nov 22 23:51:45.003: INFO: Pod "pod-projected-secrets-de3f2996-cf84-431f-9d45-cc4db440a27b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024761702s
STEP: Saw pod success
Nov 22 23:51:45.003: INFO: Pod "pod-projected-secrets-de3f2996-cf84-431f-9d45-cc4db440a27b" satisfied condition "success or failure"
Nov 22 23:51:45.007: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-de3f2996-cf84-431f-9d45-cc4db440a27b container projected-secret-volume-test: 
STEP: delete the pod
Nov 22 23:51:45.026: INFO: Waiting for pod pod-projected-secrets-de3f2996-cf84-431f-9d45-cc4db440a27b to disappear
Nov 22 23:51:45.030: INFO: Pod pod-projected-secrets-de3f2996-cf84-431f-9d45-cc4db440a27b no longer exists
[AfterEach] [sig-storage] Projected secret
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:51:45.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-423" for this suite.
Nov 22 23:51:51.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:51:51.133: INFO: namespace projected-423 deletion completed in 6.099370516s

• [SLOW TEST:10.240 seconds]
[sig-storage] Projected secret
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
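The projected-secret test above verifies that `defaultMode` on a projected volume sets the file permissions of every projected key. A rough sketch of the pod spec shape it exercises (field names are the real API ones; the secret name, mount path, and probe command are hypothetical stand-ins for the generated ones in the log):

```python
# Sketch of a pod with a projected secret volume and defaultMode set.
# The e2e container prints the mode of the mounted file and exits,
# which is why the log shows the pod reaching Phase="Succeeded".

default_mode = 0o400  # read-only for owner; the value under test

pod_spec = {
    "volumes": [{
        "name": "projected-secret-volume",
        "projected": {
            "defaultMode": default_mode,
            "sources": [{"secret": {"name": "projected-secret-test"}}],
        },
    }],
    "containers": [{
        "name": "projected-secret-volume-test",
        "image": "busybox",
        # Hypothetical probe: report the mode of a projected key.
        "command": ["sh", "-c", "stat -c '%a' /etc/projected/secret-key"],
        "volumeMounts": [{"name": "projected-secret-volume",
                          "mountPath": "/etc/projected"}],
    }],
    "restartPolicy": "Never",
}

assert pod_spec["volumes"][0]["projected"]["defaultMode"] == 0o400
```

`restartPolicy: Never` is what makes "success or failure" a terminal condition the framework can wait on.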
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:51:51.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Nov 22 23:51:51.206: INFO: Waiting up to 5m0s for pod "client-containers-f2d81cb4-056d-470a-a15b-0d39979776da" in namespace "containers-1061" to be "success or failure"
Nov 22 23:51:51.216: INFO: Pod "client-containers-f2d81cb4-056d-470a-a15b-0d39979776da": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0648ms
Nov 22 23:51:53.220: INFO: Pod "client-containers-f2d81cb4-056d-470a-a15b-0d39979776da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013893864s
Nov 22 23:51:55.224: INFO: Pod "client-containers-f2d81cb4-056d-470a-a15b-0d39979776da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018105858s
STEP: Saw pod success
Nov 22 23:51:55.224: INFO: Pod "client-containers-f2d81cb4-056d-470a-a15b-0d39979776da" satisfied condition "success or failure"
Nov 22 23:51:55.227: INFO: Trying to get logs from node iruya-worker2 pod client-containers-f2d81cb4-056d-470a-a15b-0d39979776da container test-container: 
STEP: delete the pod
Nov 22 23:51:55.340: INFO: Waiting for pod client-containers-f2d81cb4-056d-470a-a15b-0d39979776da to disappear
Nov 22 23:51:55.342: INFO: Pod client-containers-f2d81cb4-056d-470a-a15b-0d39979776da no longer exists
[AfterEach] [k8s.io] Docker Containers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:51:55.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1061" for this suite.
Nov 22 23:52:01.364: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:52:01.430: INFO: namespace containers-1061 deletion completed in 6.084365156s

• [SLOW TEST:10.296 seconds]
[k8s.io] Docker Containers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
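The Docker Containers test above checks that a container spec with blank `command` and `args` falls back to the image's `ENTRYPOINT` and `CMD`. The override rules can be sketched as a small resolver (the field semantics are the documented Kubernetes ones; the function itself is illustrative):

```python
# How spec.command/spec.args interact with the image defaults:
#   - neither set:        image ENTRYPOINT + image CMD
#   - args only:          image ENTRYPOINT + args
#   - command only:       command (image CMD ignored)
#   - both set:           command + args

def effective_invocation(command, args, entrypoint, cmd):
    if command:
        return command + (args or [])
    return entrypoint + (args or cmd)

# Blank command and args -> pure image defaults, which is what the
# test above verifies via the container's output.
assert effective_invocation(None, None, ["/ep"], ["default-arg"]) == ["/ep", "default-arg"]
assert effective_invocation(None, ["x"], ["/ep"], ["default-arg"]) == ["/ep", "x"]
assert effective_invocation(["/bin/run"], None, ["/ep"], ["default-arg"]) == ["/bin/run"]
```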
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:52:01.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:52:06.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9298" for this suite.
Nov 22 23:52:13.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:52:13.169: INFO: namespace watch-9298 deletion completed in 6.182031896s

• [SLOW TEST:11.739 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
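The Watchers test above starts a watch from each resourceVersion the background goroutine produced and asserts that every watch delivers the remaining events in the same order. The invariant can be sketched as a tiny replay model (a simulation, not the watch API; events here are synthetic):

```python
# Invariant under test: events are totally ordered by resourceVersion,
# so a watch started at any resourceVersion observes a suffix of one
# and the same sequence -- never a reordering.

events = [(rv, f"event-{rv}") for rv in range(1, 11)]  # (resourceVersion, payload)

def watch_from(events, start_rv):
    """Replay events with resourceVersion > start_rv, in order."""
    return [e for e in events if e[0] > start_rv]

# Concurrent watches from every starting point agree on the order.
for start in range(0, 10):
    assert watch_from(events, start) == events[start:]
```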
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:52:13.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-8571
I1122 23:52:13.229388       6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-8571, replica count: 1
I1122 23:52:14.279994       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1122 23:52:15.280285       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1122 23:52:16.280560       6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Nov 22 23:52:16.408: INFO: Created: latency-svc-nzzvb
Nov 22 23:52:16.426: INFO: Got endpoints: latency-svc-nzzvb [46.106736ms]
Nov 22 23:52:16.456: INFO: Created: latency-svc-s9768
Nov 22 23:52:16.469: INFO: Got endpoints: latency-svc-s9768 [42.184123ms]
Nov 22 23:52:16.513: INFO: Created: latency-svc-b4w8w
Nov 22 23:52:16.524: INFO: Got endpoints: latency-svc-b4w8w [97.000338ms]
Nov 22 23:52:16.541: INFO: Created: latency-svc-wdlwx
Nov 22 23:52:16.565: INFO: Got endpoints: latency-svc-wdlwx [138.169747ms]
Nov 22 23:52:16.588: INFO: Created: latency-svc-rf9rq
Nov 22 23:52:16.601: INFO: Got endpoints: latency-svc-rf9rq [174.6458ms]
Nov 22 23:52:16.656: INFO: Created: latency-svc-cf6zh
Nov 22 23:52:16.660: INFO: Got endpoints: latency-svc-cf6zh [232.815445ms]
Nov 22 23:52:16.696: INFO: Created: latency-svc-k97m5
Nov 22 23:52:16.710: INFO: Got endpoints: latency-svc-k97m5 [283.020137ms]
Nov 22 23:52:16.732: INFO: Created: latency-svc-4vh2k
Nov 22 23:52:16.740: INFO: Got endpoints: latency-svc-4vh2k [313.554042ms]
Nov 22 23:52:16.794: INFO: Created: latency-svc-rsdgb
Nov 22 23:52:16.797: INFO: Got endpoints: latency-svc-rsdgb [369.845501ms]
Nov 22 23:52:16.831: INFO: Created: latency-svc-vt4qn
Nov 22 23:52:16.855: INFO: Got endpoints: latency-svc-vt4qn [428.729947ms]
Nov 22 23:52:16.890: INFO: Created: latency-svc-cfbrw
Nov 22 23:52:16.955: INFO: Got endpoints: latency-svc-cfbrw [528.165681ms]
Nov 22 23:52:16.972: INFO: Created: latency-svc-8nx5c
Nov 22 23:52:16.981: INFO: Got endpoints: latency-svc-8nx5c [554.124091ms]
Nov 22 23:52:17.008: INFO: Created: latency-svc-22pqf
Nov 22 23:52:17.017: INFO: Got endpoints: latency-svc-22pqf [590.320444ms]
Nov 22 23:52:17.044: INFO: Created: latency-svc-5nxtd
Nov 22 23:52:17.093: INFO: Got endpoints: latency-svc-5nxtd [666.184838ms]
Nov 22 23:52:17.105: INFO: Created: latency-svc-vd7wh
Nov 22 23:52:17.120: INFO: Got endpoints: latency-svc-vd7wh [693.053435ms]
Nov 22 23:52:17.141: INFO: Created: latency-svc-jhj78
Nov 22 23:52:17.157: INFO: Got endpoints: latency-svc-jhj78 [729.983998ms]
Nov 22 23:52:17.177: INFO: Created: latency-svc-tsb2b
Nov 22 23:52:17.272: INFO: Got endpoints: latency-svc-tsb2b [803.689036ms]
Nov 22 23:52:17.275: INFO: Created: latency-svc-7x4wl
Nov 22 23:52:17.283: INFO: Got endpoints: latency-svc-7x4wl [758.9261ms]
Nov 22 23:52:17.320: INFO: Created: latency-svc-6x5cj
Nov 22 23:52:17.355: INFO: Got endpoints: latency-svc-6x5cj [789.69114ms]
Nov 22 23:52:17.423: INFO: Created: latency-svc-mmrfn
Nov 22 23:52:17.446: INFO: Got endpoints: latency-svc-mmrfn [844.894726ms]
Nov 22 23:52:17.478: INFO: Created: latency-svc-jgss7
Nov 22 23:52:17.493: INFO: Got endpoints: latency-svc-jgss7 [833.349033ms]
Nov 22 23:52:17.513: INFO: Created: latency-svc-9b8vh
Nov 22 23:52:17.554: INFO: Got endpoints: latency-svc-9b8vh [843.70119ms]
Nov 22 23:52:17.565: INFO: Created: latency-svc-s92vf
Nov 22 23:52:17.578: INFO: Got endpoints: latency-svc-s92vf [837.848794ms]
Nov 22 23:52:17.598: INFO: Created: latency-svc-7r6v4
Nov 22 23:52:17.609: INFO: Got endpoints: latency-svc-7r6v4 [812.291096ms]
Nov 22 23:52:17.631: INFO: Created: latency-svc-nfwzb
Nov 22 23:52:17.651: INFO: Got endpoints: latency-svc-nfwzb [795.40926ms]
Nov 22 23:52:17.709: INFO: Created: latency-svc-xq5fm
Nov 22 23:52:17.724: INFO: Got endpoints: latency-svc-xq5fm [769.097709ms]
Nov 22 23:52:17.752: INFO: Created: latency-svc-xm8b2
Nov 22 23:52:17.793: INFO: Got endpoints: latency-svc-xm8b2 [812.105791ms]
Nov 22 23:52:17.853: INFO: Created: latency-svc-7w7b8
Nov 22 23:52:17.896: INFO: Got endpoints: latency-svc-7w7b8 [878.928875ms]
Nov 22 23:52:17.897: INFO: Created: latency-svc-82vrn
Nov 22 23:52:17.910: INFO: Got endpoints: latency-svc-82vrn [816.809685ms]
Nov 22 23:52:17.939: INFO: Created: latency-svc-chjnv
Nov 22 23:52:17.952: INFO: Got endpoints: latency-svc-chjnv [831.95579ms]
Nov 22 23:52:17.997: INFO: Created: latency-svc-464vp
Nov 22 23:52:18.005: INFO: Got endpoints: latency-svc-464vp [848.580553ms]
Nov 22 23:52:18.058: INFO: Created: latency-svc-x5dnx
Nov 22 23:52:18.073: INFO: Got endpoints: latency-svc-x5dnx [800.363824ms]
Nov 22 23:52:18.093: INFO: Created: latency-svc-s5jfl
Nov 22 23:52:18.147: INFO: Got endpoints: latency-svc-s5jfl [864.1608ms]
Nov 22 23:52:18.165: INFO: Created: latency-svc-n765v
Nov 22 23:52:18.181: INFO: Got endpoints: latency-svc-n765v [826.072323ms]
Nov 22 23:52:18.202: INFO: Created: latency-svc-mmw65
Nov 22 23:52:18.211: INFO: Got endpoints: latency-svc-mmw65 [765.017339ms]
Nov 22 23:52:18.238: INFO: Created: latency-svc-6lkxz
Nov 22 23:52:18.272: INFO: Got endpoints: latency-svc-6lkxz [778.990596ms]
Nov 22 23:52:18.298: INFO: Created: latency-svc-76fw4
Nov 22 23:52:18.314: INFO: Got endpoints: latency-svc-76fw4 [760.373958ms]
Nov 22 23:52:18.339: INFO: Created: latency-svc-v4h8m
Nov 22 23:52:18.357: INFO: Got endpoints: latency-svc-v4h8m [778.64882ms]
Nov 22 23:52:18.410: INFO: Created: latency-svc-rwt9z
Nov 22 23:52:18.416: INFO: Got endpoints: latency-svc-rwt9z [806.923603ms]
Nov 22 23:52:18.453: INFO: Created: latency-svc-mt6ns
Nov 22 23:52:18.482: INFO: Got endpoints: latency-svc-mt6ns [831.289702ms]
Nov 22 23:52:18.502: INFO: Created: latency-svc-df5dt
Nov 22 23:52:18.560: INFO: Got endpoints: latency-svc-df5dt [835.595698ms]
Nov 22 23:52:18.598: INFO: Created: latency-svc-tt7z7
Nov 22 23:52:18.614: INFO: Got endpoints: latency-svc-tt7z7 [820.842651ms]
Nov 22 23:52:18.633: INFO: Created: latency-svc-d8w46
Nov 22 23:52:18.644: INFO: Got endpoints: latency-svc-d8w46 [748.01103ms]
Nov 22 23:52:18.704: INFO: Created: latency-svc-66l7z
Nov 22 23:52:18.706: INFO: Got endpoints: latency-svc-66l7z [796.285218ms]
Nov 22 23:52:18.734: INFO: Created: latency-svc-n5xl2
Nov 22 23:52:18.753: INFO: Got endpoints: latency-svc-n5xl2 [800.910683ms]
Nov 22 23:52:18.777: INFO: Created: latency-svc-zhmcj
Nov 22 23:52:18.789: INFO: Got endpoints: latency-svc-zhmcj [783.66145ms]
Nov 22 23:52:18.847: INFO: Created: latency-svc-v9d2s
Nov 22 23:52:18.855: INFO: Got endpoints: latency-svc-v9d2s [782.199617ms]
Nov 22 23:52:18.922: INFO: Created: latency-svc-tlsnz
Nov 22 23:52:18.940: INFO: Got endpoints: latency-svc-tlsnz [792.707323ms]
Nov 22 23:52:18.997: INFO: Created: latency-svc-pmj68
Nov 22 23:52:19.000: INFO: Got endpoints: latency-svc-pmj68 [819.015019ms]
Nov 22 23:52:19.048: INFO: Created: latency-svc-s6h9j
Nov 22 23:52:19.076: INFO: Got endpoints: latency-svc-s6h9j [864.695106ms]
Nov 22 23:52:19.147: INFO: Created: latency-svc-x6txs
Nov 22 23:52:19.156: INFO: Got endpoints: latency-svc-x6txs [883.580679ms]
Nov 22 23:52:19.180: INFO: Created: latency-svc-fgpt4
Nov 22 23:52:19.192: INFO: Got endpoints: latency-svc-fgpt4 [877.981119ms]
Nov 22 23:52:19.209: INFO: Created: latency-svc-f9hg7
Nov 22 23:52:19.223: INFO: Got endpoints: latency-svc-f9hg7 [865.77899ms]
Nov 22 23:52:19.290: INFO: Created: latency-svc-lkk8k
Nov 22 23:52:19.294: INFO: Got endpoints: latency-svc-lkk8k [877.458553ms]
Nov 22 23:52:19.329: INFO: Created: latency-svc-fvsz8
Nov 22 23:52:19.343: INFO: Got endpoints: latency-svc-fvsz8 [860.702437ms]
Nov 22 23:52:19.365: INFO: Created: latency-svc-fp4mm
Nov 22 23:52:19.373: INFO: Got endpoints: latency-svc-fp4mm [813.102076ms]
Nov 22 23:52:19.434: INFO: Created: latency-svc-mxkj9
Nov 22 23:52:19.455: INFO: Got endpoints: latency-svc-mxkj9 [840.661319ms]
Nov 22 23:52:19.455: INFO: Created: latency-svc-rjl4c
Nov 22 23:52:19.485: INFO: Got endpoints: latency-svc-rjl4c [840.650256ms]
Nov 22 23:52:19.515: INFO: Created: latency-svc-g8smz
Nov 22 23:52:19.525: INFO: Got endpoints: latency-svc-g8smz [819.082203ms]
Nov 22 23:52:19.566: INFO: Created: latency-svc-b44dz
Nov 22 23:52:19.570: INFO: Got endpoints: latency-svc-b44dz [817.387864ms]
Nov 22 23:52:19.602: INFO: Created: latency-svc-7z848
Nov 22 23:52:19.616: INFO: Got endpoints: latency-svc-7z848 [827.357739ms]
Nov 22 23:52:19.640: INFO: Created: latency-svc-td6zh
Nov 22 23:52:19.652: INFO: Got endpoints: latency-svc-td6zh [797.220654ms]
Nov 22 23:52:19.704: INFO: Created: latency-svc-pg994
Nov 22 23:52:19.737: INFO: Got endpoints: latency-svc-pg994 [796.82747ms]
Nov 22 23:52:19.737: INFO: Created: latency-svc-bq929
Nov 22 23:52:19.755: INFO: Got endpoints: latency-svc-bq929 [755.067138ms]
Nov 22 23:52:19.786: INFO: Created: latency-svc-qrdnw
Nov 22 23:52:19.859: INFO: Got endpoints: latency-svc-qrdnw [782.826354ms]
Nov 22 23:52:19.880: INFO: Created: latency-svc-rgfzz
Nov 22 23:52:19.894: INFO: Got endpoints: latency-svc-rgfzz [738.360448ms]
Nov 22 23:52:19.915: INFO: Created: latency-svc-7swjm
Nov 22 23:52:19.930: INFO: Got endpoints: latency-svc-7swjm [738.393675ms]
Nov 22 23:52:20.015: INFO: Created: latency-svc-lsg5j
Nov 22 23:52:20.041: INFO: Got endpoints: latency-svc-lsg5j [818.582698ms]
Nov 22 23:52:20.042: INFO: Created: latency-svc-b24kz
Nov 22 23:52:20.056: INFO: Got endpoints: latency-svc-b24kz [762.426673ms]
Nov 22 23:52:20.079: INFO: Created: latency-svc-gg5kh
Nov 22 23:52:20.102: INFO: Got endpoints: latency-svc-gg5kh [759.334345ms]
Nov 22 23:52:20.165: INFO: Created: latency-svc-68mfk
Nov 22 23:52:20.170: INFO: Got endpoints: latency-svc-68mfk [797.298353ms]
Nov 22 23:52:20.198: INFO: Created: latency-svc-9hlb2
Nov 22 23:52:20.213: INFO: Got endpoints: latency-svc-9hlb2 [758.05731ms]
Nov 22 23:52:20.246: INFO: Created: latency-svc-zksw2
Nov 22 23:52:20.261: INFO: Got endpoints: latency-svc-zksw2 [775.93244ms]
Nov 22 23:52:20.349: INFO: Created: latency-svc-fhfqw
Nov 22 23:52:20.363: INFO: Got endpoints: latency-svc-fhfqw [837.85096ms]
Nov 22 23:52:20.385: INFO: Created: latency-svc-7gdgq
Nov 22 23:52:20.393: INFO: Got endpoints: latency-svc-7gdgq [822.91841ms]
Nov 22 23:52:20.452: INFO: Created: latency-svc-l24pp
Nov 22 23:52:20.535: INFO: Got endpoints: latency-svc-l24pp [918.554158ms]
Nov 22 23:52:20.589: INFO: Created: latency-svc-pxfdb
Nov 22 23:52:20.604: INFO: Got endpoints: latency-svc-pxfdb [951.299423ms]
Nov 22 23:52:20.635: INFO: Created: latency-svc-xt2gs
Nov 22 23:52:20.652: INFO: Got endpoints: latency-svc-xt2gs [915.483223ms]
Nov 22 23:52:20.720: INFO: Created: latency-svc-k9kkf
Nov 22 23:52:20.737: INFO: Got endpoints: latency-svc-k9kkf [981.340806ms]
Nov 22 23:52:20.781: INFO: Created: latency-svc-7fq9c
Nov 22 23:52:20.797: INFO: Got endpoints: latency-svc-7fq9c [937.709213ms]
Nov 22 23:52:20.926: INFO: Created: latency-svc-jsm5x
Nov 22 23:52:20.943: INFO: Got endpoints: latency-svc-jsm5x [1.048364518s]
Nov 22 23:52:20.991: INFO: Created: latency-svc-rl6r9
Nov 22 23:52:21.013: INFO: Got endpoints: latency-svc-rl6r9 [1.082702516s]
Nov 22 23:52:21.106: INFO: Created: latency-svc-7h6bn
Nov 22 23:52:21.127: INFO: Got endpoints: latency-svc-7h6bn [1.085797719s]
Nov 22 23:52:21.158: INFO: Created: latency-svc-lttlw
Nov 22 23:52:21.218: INFO: Got endpoints: latency-svc-lttlw [1.162168383s]
Nov 22 23:52:21.220: INFO: Created: latency-svc-6w6fs
Nov 22 23:52:21.223: INFO: Got endpoints: latency-svc-6w6fs [1.120981351s]
Nov 22 23:52:21.249: INFO: Created: latency-svc-xr46x
Nov 22 23:52:21.278: INFO: Got endpoints: latency-svc-xr46x [1.107571024s]
Nov 22 23:52:21.392: INFO: Created: latency-svc-6l9wj
Nov 22 23:52:21.396: INFO: Got endpoints: latency-svc-6l9wj [1.183139604s]
Nov 22 23:52:21.454: INFO: Created: latency-svc-cwpfb
Nov 22 23:52:21.464: INFO: Got endpoints: latency-svc-cwpfb [1.20291797s]
Nov 22 23:52:21.483: INFO: Created: latency-svc-lv9br
Nov 22 23:52:21.524: INFO: Got endpoints: latency-svc-lv9br [1.16042676s]
Nov 22 23:52:21.536: INFO: Created: latency-svc-6sdv4
Nov 22 23:52:21.574: INFO: Got endpoints: latency-svc-6sdv4 [1.180424606s]
Nov 22 23:52:21.616: INFO: Created: latency-svc-n2l7n
Nov 22 23:52:21.658: INFO: Got endpoints: latency-svc-n2l7n [1.122425307s]
Nov 22 23:52:21.676: INFO: Created: latency-svc-6cvlx
Nov 22 23:52:21.687: INFO: Got endpoints: latency-svc-6cvlx [1.083200538s]
Nov 22 23:52:21.722: INFO: Created: latency-svc-cxcw7
Nov 22 23:52:21.742: INFO: Got endpoints: latency-svc-cxcw7 [1.08956418s]
Nov 22 23:52:21.836: INFO: Created: latency-svc-g4g5v
Nov 22 23:52:21.844: INFO: Got endpoints: latency-svc-g4g5v [1.106939118s]
Nov 22 23:52:21.885: INFO: Created: latency-svc-c5pzt
Nov 22 23:52:21.934: INFO: Got endpoints: latency-svc-c5pzt [1.137194709s]
Nov 22 23:52:22.001: INFO: Created: latency-svc-svn88
Nov 22 23:52:22.019: INFO: Got endpoints: latency-svc-svn88 [1.075843866s]
Nov 22 23:52:22.066: INFO: Created: latency-svc-rllb2
Nov 22 23:52:22.078: INFO: Got endpoints: latency-svc-rllb2 [1.065086014s]
Nov 22 23:52:22.129: INFO: Created: latency-svc-dgxj9
Nov 22 23:52:22.133: INFO: Got endpoints: latency-svc-dgxj9 [1.0053308s]
Nov 22 23:52:22.172: INFO: Created: latency-svc-llrvn
Nov 22 23:52:22.181: INFO: Got endpoints: latency-svc-llrvn [962.701263ms]
Nov 22 23:52:22.203: INFO: Created: latency-svc-l8gqn
Nov 22 23:52:22.221: INFO: Got endpoints: latency-svc-l8gqn [997.7623ms]
Nov 22 23:52:22.279: INFO: Created: latency-svc-b2j5c
Nov 22 23:52:22.282: INFO: Got endpoints: latency-svc-b2j5c [1.003849227s]
Nov 22 23:52:22.437: INFO: Created: latency-svc-8nc29
Nov 22 23:52:22.487: INFO: Got endpoints: latency-svc-8nc29 [1.09124564s]
Nov 22 23:52:22.628: INFO: Created: latency-svc-f5v66
Nov 22 23:52:22.662: INFO: Got endpoints: latency-svc-f5v66 [1.197650476s]
Nov 22 23:52:22.695: INFO: Created: latency-svc-5fjjz
Nov 22 23:52:22.710: INFO: Got endpoints: latency-svc-5fjjz [1.185884267s]
Nov 22 23:52:22.812: INFO: Created: latency-svc-gl6fm
Nov 22 23:52:22.830: INFO: Got endpoints: latency-svc-gl6fm [1.256543559s]
Nov 22 23:52:22.876: INFO: Created: latency-svc-lpm2d
Nov 22 23:52:22.896: INFO: Got endpoints: latency-svc-lpm2d [1.238819251s]
Nov 22 23:52:22.955: INFO: Created: latency-svc-c7n7c
Nov 22 23:52:22.982: INFO: Got endpoints: latency-svc-c7n7c [1.295334854s]
Nov 22 23:52:22.983: INFO: Created: latency-svc-b6jlh
Nov 22 23:52:23.007: INFO: Got endpoints: latency-svc-b6jlh [1.264793346s]
Nov 22 23:52:23.038: INFO: Created: latency-svc-wrxvc
Nov 22 23:52:23.047: INFO: Got endpoints: latency-svc-wrxvc [1.202965813s]
Nov 22 23:52:23.099: INFO: Created: latency-svc-m2mh4
Nov 22 23:52:23.102: INFO: Got endpoints: latency-svc-m2mh4 [1.168253342s]
Nov 22 23:52:23.134: INFO: Created: latency-svc-w7mpw
Nov 22 23:52:23.149: INFO: Got endpoints: latency-svc-w7mpw [1.130730067s]
Nov 22 23:52:23.170: INFO: Created: latency-svc-6bg28
Nov 22 23:52:23.236: INFO: Got endpoints: latency-svc-6bg28 [1.157809595s]
Nov 22 23:52:23.237: INFO: Created: latency-svc-pl6hk
Nov 22 23:52:23.240: INFO: Got endpoints: latency-svc-pl6hk [1.107066772s]
Nov 22 23:52:23.289: INFO: Created: latency-svc-6kkr8
Nov 22 23:52:23.314: INFO: Got endpoints: latency-svc-6kkr8 [1.132958615s]
Nov 22 23:52:23.375: INFO: Created: latency-svc-t8wnp
Nov 22 23:52:23.378: INFO: Got endpoints: latency-svc-t8wnp [1.156209289s]
Nov 22 23:52:23.469: INFO: Created: latency-svc-mvn9h
Nov 22 23:52:23.506: INFO: Got endpoints: latency-svc-mvn9h [1.224200012s]
Nov 22 23:52:23.529: INFO: Created: latency-svc-rv9sv
Nov 22 23:52:23.553: INFO: Got endpoints: latency-svc-rv9sv [1.065839788s]
Nov 22 23:52:23.597: INFO: Created: latency-svc-k6btz
Nov 22 23:52:23.643: INFO: Got endpoints: latency-svc-k6btz [981.374906ms]
Nov 22 23:52:23.673: INFO: Created: latency-svc-pk6k8
Nov 22 23:52:23.685: INFO: Got endpoints: latency-svc-pk6k8 [975.516421ms]
Nov 22 23:52:23.703: INFO: Created: latency-svc-s4qq2
Nov 22 23:52:23.716: INFO: Got endpoints: latency-svc-s4qq2 [885.267188ms]
Nov 22 23:52:23.770: INFO: Created: latency-svc-qrq8f
Nov 22 23:52:23.773: INFO: Got endpoints: latency-svc-qrq8f [876.216747ms]
Nov 22 23:52:23.822: INFO: Created: latency-svc-7s8qb
Nov 22 23:52:23.846: INFO: Got endpoints: latency-svc-7s8qb [863.631633ms]
Nov 22 23:52:23.909: INFO: Created: latency-svc-pc4tt
Nov 22 23:52:23.915: INFO: Got endpoints: latency-svc-pc4tt [908.346125ms]
Nov 22 23:52:23.979: INFO: Created: latency-svc-chm47
Nov 22 23:52:24.005: INFO: Got endpoints: latency-svc-chm47 [958.155758ms]
Nov 22 23:52:24.051: INFO: Created: latency-svc-nt6gn
Nov 22 23:52:24.090: INFO: Got endpoints: latency-svc-nt6gn [987.080853ms]
Nov 22 23:52:24.116: INFO: Created: latency-svc-hz5pf
Nov 22 23:52:24.164: INFO: Got endpoints: latency-svc-hz5pf [1.014800267s]
Nov 22 23:52:24.207: INFO: Created: latency-svc-ndkk4
Nov 22 23:52:24.221: INFO: Got endpoints: latency-svc-ndkk4 [984.816493ms]
Nov 22 23:52:24.296: INFO: Created: latency-svc-s7ssv
Nov 22 23:52:24.320: INFO: Got endpoints: latency-svc-s7ssv [1.079955862s]
Nov 22 23:52:24.321: INFO: Created: latency-svc-thg9v
Nov 22 23:52:24.335: INFO: Got endpoints: latency-svc-thg9v [1.021328002s]
Nov 22 23:52:24.362: INFO: Created: latency-svc-h6r8d
Nov 22 23:52:24.373: INFO: Got endpoints: latency-svc-h6r8d [995.366485ms]
Nov 22 23:52:24.950: INFO: Created: latency-svc-r86c2
Nov 22 23:52:24.954: INFO: Got endpoints: latency-svc-r86c2 [1.447520229s]
Nov 22 23:52:25.549: INFO: Created: latency-svc-fsfjn
Nov 22 23:52:25.559: INFO: Got endpoints: latency-svc-fsfjn [2.00519984s]
Nov 22 23:52:25.591: INFO: Created: latency-svc-ffzx2
Nov 22 23:52:25.614: INFO: Got endpoints: latency-svc-ffzx2 [1.971099865s]
Nov 22 23:52:25.646: INFO: Created: latency-svc-7nq2t
Nov 22 23:52:25.739: INFO: Got endpoints: latency-svc-7nq2t [2.053945866s]
Nov 22 23:52:25.764: INFO: Created: latency-svc-6zw8r
Nov 22 23:52:25.781: INFO: Got endpoints: latency-svc-6zw8r [2.065503318s]
Nov 22 23:52:25.800: INFO: Created: latency-svc-f9hwx
Nov 22 23:52:25.811: INFO: Got endpoints: latency-svc-f9hwx [2.038576792s]
Nov 22 23:52:25.830: INFO: Created: latency-svc-9clc6
Nov 22 23:52:25.865: INFO: Got endpoints: latency-svc-9clc6 [2.018726538s]
Nov 22 23:52:25.885: INFO: Created: latency-svc-n9kdm
Nov 22 23:52:25.902: INFO: Got endpoints: latency-svc-n9kdm [1.986761476s]
Nov 22 23:52:25.921: INFO: Created: latency-svc-ltbq7
Nov 22 23:52:25.939: INFO: Got endpoints: latency-svc-ltbq7 [1.934234911s]
Nov 22 23:52:26.015: INFO: Created: latency-svc-t4sn6
Nov 22 23:52:26.028: INFO: Got endpoints: latency-svc-t4sn6 [1.938001894s]
Nov 22 23:52:26.078: INFO: Created: latency-svc-kd7xb
Nov 22 23:52:26.095: INFO: Got endpoints: latency-svc-kd7xb [1.930757565s]
Nov 22 23:52:26.153: INFO: Created: latency-svc-jpzpk
Nov 22 23:52:26.173: INFO: Got endpoints: latency-svc-jpzpk [1.951750547s]
Nov 22 23:52:26.177: INFO: Created: latency-svc-nj8kn
Nov 22 23:52:26.185: INFO: Got endpoints: latency-svc-nj8kn [1.865183197s]
Nov 22 23:52:26.220: INFO: Created: latency-svc-8rr8v
Nov 22 23:52:26.234: INFO: Got endpoints: latency-svc-8rr8v [1.898475772s]
Nov 22 23:52:26.291: INFO: Created: latency-svc-22q5k
Nov 22 23:52:26.304: INFO: Got endpoints: latency-svc-22q5k [1.930817601s]
Nov 22 23:52:26.335: INFO: Created: latency-svc-c6sfh
Nov 22 23:52:26.349: INFO: Got endpoints: latency-svc-c6sfh [1.394872574s]
Nov 22 23:52:26.430: INFO: Created: latency-svc-bcltr
Nov 22 23:52:26.455: INFO: Got endpoints: latency-svc-bcltr [896.20496ms]
Nov 22 23:52:26.456: INFO: Created: latency-svc-bm954
Nov 22 23:52:26.469: INFO: Got endpoints: latency-svc-bm954 [854.346319ms]
Nov 22 23:52:26.490: INFO: Created: latency-svc-t2dph
Nov 22 23:52:26.511: INFO: Got endpoints: latency-svc-t2dph [771.728642ms]
Nov 22 23:52:26.584: INFO: Created: latency-svc-pkch9
Nov 22 23:52:26.602: INFO: Got endpoints: latency-svc-pkch9 [821.020141ms]
Nov 22 23:52:26.642: INFO: Created: latency-svc-lqnrs
Nov 22 23:52:26.656: INFO: Got endpoints: latency-svc-lqnrs [844.282749ms]
Nov 22 23:52:26.677: INFO: Created: latency-svc-cbzlf
Nov 22 23:52:26.735: INFO: Got endpoints: latency-svc-cbzlf [870.206933ms]
Nov 22 23:52:26.766: INFO: Created: latency-svc-vkf9t
Nov 22 23:52:26.776: INFO: Got endpoints: latency-svc-vkf9t [873.970672ms]
Nov 22 23:52:26.796: INFO: Created: latency-svc-57sg6
Nov 22 23:52:26.806: INFO: Got endpoints: latency-svc-57sg6 [867.065776ms]
Nov 22 23:52:26.847: INFO: Created: latency-svc-gfg7b
Nov 22 23:52:26.868: INFO: Got endpoints: latency-svc-gfg7b [840.692898ms]
Nov 22 23:52:26.869: INFO: Created: latency-svc-ntnr7
Nov 22 23:52:26.885: INFO: Got endpoints: latency-svc-ntnr7 [789.896061ms]
Nov 22 23:52:26.905: INFO: Created: latency-svc-v7dzd
Nov 22 23:52:26.921: INFO: Got endpoints: latency-svc-v7dzd [747.995765ms]
Nov 22 23:52:26.941: INFO: Created: latency-svc-rz4x2
Nov 22 23:52:26.973: INFO: Got endpoints: latency-svc-rz4x2 [787.806649ms]
Nov 22 23:52:26.993: INFO: Created: latency-svc-z7bcr
Nov 22 23:52:27.018: INFO: Got endpoints: latency-svc-z7bcr [783.888884ms]
Nov 22 23:52:27.054: INFO: Created: latency-svc-4bttl
Nov 22 23:52:27.117: INFO: Got endpoints: latency-svc-4bttl [812.907006ms]
Nov 22 23:52:27.153: INFO: Created: latency-svc-lsrtm
Nov 22 23:52:27.168: INFO: Got endpoints: latency-svc-lsrtm [819.342817ms]
Nov 22 23:52:27.191: INFO: Created: latency-svc-2brzn
Nov 22 23:52:27.204: INFO: Got endpoints: latency-svc-2brzn [749.093446ms]
Nov 22 23:52:27.261: INFO: Created: latency-svc-8k27d
Nov 22 23:52:27.263: INFO: Got endpoints: latency-svc-8k27d [794.136702ms]
Nov 22 23:52:27.325: INFO: Created: latency-svc-xnjqw
Nov 22 23:52:27.343: INFO: Got endpoints: latency-svc-xnjqw [831.744257ms]
Nov 22 23:52:27.399: INFO: Created: latency-svc-5v7gt
Nov 22 23:52:27.420: INFO: Got endpoints: latency-svc-5v7gt [817.154835ms]
Nov 22 23:52:27.420: INFO: Created: latency-svc-52hz4
Nov 22 23:52:27.433: INFO: Got endpoints: latency-svc-52hz4 [777.633861ms]
Nov 22 23:52:27.456: INFO: Created: latency-svc-h5wqt
Nov 22 23:52:27.480: INFO: Got endpoints: latency-svc-h5wqt [744.231875ms]
Nov 22 23:52:27.548: INFO: Created: latency-svc-2qdh2
Nov 22 23:52:27.550: INFO: Got endpoints: latency-svc-2qdh2 [774.248804ms]
Nov 22 23:52:27.577: INFO: Created: latency-svc-z5kf9
Nov 22 23:52:27.603: INFO: Got endpoints: latency-svc-z5kf9 [796.63634ms]
Nov 22 23:52:27.643: INFO: Created: latency-svc-nc5qc
Nov 22 23:52:27.680: INFO: Got endpoints: latency-svc-nc5qc [810.969679ms]
Nov 22 23:52:27.696: INFO: Created: latency-svc-vz4qx
Nov 22 23:52:27.717: INFO: Got endpoints: latency-svc-vz4qx [831.695576ms]
Nov 22 23:52:27.738: INFO: Created: latency-svc-bhp2x
Nov 22 23:52:27.747: INFO: Got endpoints: latency-svc-bhp2x [825.759641ms]
Nov 22 23:52:27.768: INFO: Created: latency-svc-kbnwt
Nov 22 23:52:27.778: INFO: Got endpoints: latency-svc-kbnwt [804.929031ms]
Nov 22 23:52:27.824: INFO: Created: latency-svc-sjx8x
Nov 22 23:52:27.825: INFO: Got endpoints: latency-svc-sjx8x [807.460937ms]
Nov 22 23:52:27.877: INFO: Created: latency-svc-5czmf
Nov 22 23:52:27.907: INFO: Got endpoints: latency-svc-5czmf [789.813801ms]
Nov 22 23:52:27.967: INFO: Created: latency-svc-jjpjh
Nov 22 23:52:27.971: INFO: Got endpoints: latency-svc-jjpjh [803.263949ms]
Nov 22 23:52:27.996: INFO: Created: latency-svc-dnkf5
Nov 22 23:52:28.013: INFO: Got endpoints: latency-svc-dnkf5 [808.409986ms]
Nov 22 23:52:28.111: INFO: Created: latency-svc-txgng
Nov 22 23:52:28.118: INFO: Got endpoints: latency-svc-txgng [854.525177ms]
Nov 22 23:52:28.163: INFO: Created: latency-svc-fmqq4
Nov 22 23:52:28.181: INFO: Got endpoints: latency-svc-fmqq4 [837.870759ms]
Nov 22 23:52:28.199: INFO: Created: latency-svc-bjxnk
Nov 22 23:52:28.273: INFO: Got endpoints: latency-svc-bjxnk [853.285206ms]
Nov 22 23:52:28.321: INFO: Created: latency-svc-pdtcf
Nov 22 23:52:28.338: INFO: Got endpoints: latency-svc-pdtcf [904.264793ms]
Nov 22 23:52:28.357: INFO: Created: latency-svc-jgwlr
Nov 22 23:52:28.398: INFO: Got endpoints: latency-svc-jgwlr [918.302687ms]
Nov 22 23:52:28.410: INFO: Created: latency-svc-k8x7b
Nov 22 23:52:28.434: INFO: Got endpoints: latency-svc-k8x7b [883.675246ms]
Nov 22 23:52:28.457: INFO: Created: latency-svc-g4qrq
Nov 22 23:52:28.470: INFO: Got endpoints: latency-svc-g4qrq [867.267694ms]
Nov 22 23:52:28.537: INFO: Created: latency-svc-nlznk
Nov 22 23:52:28.540: INFO: Got endpoints: latency-svc-nlznk [859.973903ms]
Nov 22 23:52:28.608: INFO: Created: latency-svc-qqfvx
Nov 22 23:52:28.692: INFO: Got endpoints: latency-svc-qqfvx [974.932212ms]
Nov 22 23:52:28.734: INFO: Created: latency-svc-cbkxq
Nov 22 23:52:28.746: INFO: Got endpoints: latency-svc-cbkxq [998.594315ms]
Nov 22 23:52:28.769: INFO: Created: latency-svc-vntvw
Nov 22 23:52:28.782: INFO: Got endpoints: latency-svc-vntvw [1.003760929s]
Nov 22 23:52:28.830: INFO: Created: latency-svc-7fczf
Nov 22 23:52:28.832: INFO: Got endpoints: latency-svc-7fczf [1.006742219s]
Nov 22 23:52:28.860: INFO: Created: latency-svc-qtjt2
Nov 22 23:52:28.873: INFO: Got endpoints: latency-svc-qtjt2 [965.920757ms]
Nov 22 23:52:28.897: INFO: Created: latency-svc-pp76r
Nov 22 23:52:28.909: INFO: Got endpoints: latency-svc-pp76r [937.531227ms]
Nov 22 23:52:28.968: INFO: Created: latency-svc-4kcmm
Nov 22 23:52:28.972: INFO: Got endpoints: latency-svc-4kcmm [959.240027ms]
Nov 22 23:52:29.039: INFO: Created: latency-svc-wls26
Nov 22 23:52:29.059: INFO: Got endpoints: latency-svc-wls26 [941.863232ms]
Nov 22 23:52:29.100: INFO: Created: latency-svc-d664g
Nov 22 23:52:29.108: INFO: Got endpoints: latency-svc-d664g [927.083799ms]
Nov 22 23:52:29.160: INFO: Created: latency-svc-dfn9p
Nov 22 23:52:29.174: INFO: Got endpoints: latency-svc-dfn9p [900.748942ms]
Nov 22 23:52:29.197: INFO: Created: latency-svc-lb5kn
Nov 22 23:52:29.231: INFO: Got endpoints: latency-svc-lb5kn [892.726037ms]
Nov 22 23:52:29.244: INFO: Created: latency-svc-4xgcg
Nov 22 23:52:29.258: INFO: Got endpoints: latency-svc-4xgcg [860.451689ms]
Nov 22 23:52:29.285: INFO: Created: latency-svc-sqtcj
Nov 22 23:52:29.301: INFO: Got endpoints: latency-svc-sqtcj [867.198542ms]
Nov 22 23:52:29.321: INFO: Created: latency-svc-rw6sd
Nov 22 23:52:29.356: INFO: Got endpoints: latency-svc-rw6sd [885.849981ms]
Nov 22 23:52:29.369: INFO: Created: latency-svc-v4hws
Nov 22 23:52:29.400: INFO: Got endpoints: latency-svc-v4hws [860.246806ms]
Nov 22 23:52:29.430: INFO: Created: latency-svc-6dzfj
Nov 22 23:52:29.439: INFO: Got endpoints: latency-svc-6dzfj [747.421572ms]
Nov 22 23:52:29.439: INFO: Latencies: [42.184123ms 97.000338ms 138.169747ms 174.6458ms 232.815445ms 283.020137ms 313.554042ms 369.845501ms 428.729947ms 528.165681ms 554.124091ms 590.320444ms 666.184838ms 693.053435ms 729.983998ms 738.360448ms 738.393675ms 744.231875ms 747.421572ms 747.995765ms 748.01103ms 749.093446ms 755.067138ms 758.05731ms 758.9261ms 759.334345ms 760.373958ms 762.426673ms 765.017339ms 769.097709ms 771.728642ms 774.248804ms 775.93244ms 777.633861ms 778.64882ms 778.990596ms 782.199617ms 782.826354ms 783.66145ms 783.888884ms 787.806649ms 789.69114ms 789.813801ms 789.896061ms 792.707323ms 794.136702ms 795.40926ms 796.285218ms 796.63634ms 796.82747ms 797.220654ms 797.298353ms 800.363824ms 800.910683ms 803.263949ms 803.689036ms 804.929031ms 806.923603ms 807.460937ms 808.409986ms 810.969679ms 812.105791ms 812.291096ms 812.907006ms 813.102076ms 816.809685ms 817.154835ms 817.387864ms 818.582698ms 819.015019ms 819.082203ms 819.342817ms 820.842651ms 821.020141ms 822.91841ms 825.759641ms 826.072323ms 827.357739ms 831.289702ms 831.695576ms 831.744257ms 831.95579ms 833.349033ms 835.595698ms 837.848794ms 837.85096ms 837.870759ms 840.650256ms 840.661319ms 840.692898ms 843.70119ms 844.282749ms 844.894726ms 848.580553ms 853.285206ms 854.346319ms 854.525177ms 859.973903ms 860.246806ms 860.451689ms 860.702437ms 863.631633ms 864.1608ms 864.695106ms 865.77899ms 867.065776ms 867.198542ms 867.267694ms 870.206933ms 873.970672ms 876.216747ms 877.458553ms 877.981119ms 878.928875ms 883.580679ms 883.675246ms 885.267188ms 885.849981ms 892.726037ms 896.20496ms 900.748942ms 904.264793ms 908.346125ms 915.483223ms 918.302687ms 918.554158ms 927.083799ms 937.531227ms 937.709213ms 941.863232ms 951.299423ms 958.155758ms 959.240027ms 962.701263ms 965.920757ms 974.932212ms 975.516421ms 981.340806ms 981.374906ms 984.816493ms 987.080853ms 995.366485ms 997.7623ms 998.594315ms 1.003760929s 1.003849227s 1.0053308s 1.006742219s 1.014800267s 1.021328002s 1.048364518s 1.065086014s 1.065839788s 1.075843866s 1.079955862s 1.082702516s 1.083200538s 1.085797719s 1.08956418s 1.09124564s 1.106939118s 1.107066772s 1.107571024s 1.120981351s 1.122425307s 1.130730067s 1.132958615s 1.137194709s 1.156209289s 1.157809595s 1.16042676s 1.162168383s 1.168253342s 1.180424606s 1.183139604s 1.185884267s 1.197650476s 1.20291797s 1.202965813s 1.224200012s 1.238819251s 1.256543559s 1.264793346s 1.295334854s 1.394872574s 1.447520229s 1.865183197s 1.898475772s 1.930757565s 1.930817601s 1.934234911s 1.938001894s 1.951750547s 1.971099865s 1.986761476s 2.00519984s 2.018726538s 2.038576792s 2.053945866s 2.065503318s]
Nov 22 23:52:29.439: INFO: 50 %ile: 860.702437ms
Nov 22 23:52:29.439: INFO: 90 %ile: 1.238819251s
Nov 22 23:52:29.439: INFO: 99 %ile: 2.053945866s
Nov 22 23:52:29.439: INFO: Total sample count: 200
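The three percentile lines above are read straight off the sorted 200-entry latency list printed just before them. A minimal sketch of a nearest-rank style lookup that reproduces this kind of summary (the exact rounding rule the e2e framework uses is an assumption here, not taken from its source):

```python
def percentile(sorted_samples, p):
    # Nearest-rank style lookup: index = int(p/100 * n), clamped to the
    # last element. With n=200 this picks index 100 for the 50 %ile,
    # 180 for the 90 %ile, and 198 for the 99 %ile. The rounding rule
    # is an assumption for illustration, not the framework's verified code.
    n = len(sorted_samples)
    idx = min(int(p / 100.0 * n), n - 1)
    return sorted_samples[idx]

# Usage with a synthetic sorted sample set of 200 values:
samples = list(range(1, 201))
print(percentile(samples, 50))  # -> 101
print(percentile(samples, 90))  # -> 181
print(percentile(samples, 99))  # -> 199
```

Note that this index convention differs slightly from interpolating percentile definitions (e.g. `statistics.quantiles`), which would blend adjacent samples rather than pick one.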
[AfterEach] [sig-network] Service endpoints latency
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:52:29.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-8571" for this suite.
Nov 22 23:52:55.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:52:55.538: INFO: namespace svc-latency-8571 deletion completed in 26.091877096s

• [SLOW TEST:42.368 seconds]
[sig-network] Service endpoints latency
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:52:55.539: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:52:55.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b0bdd664-7efe-4f72-b00e-b0414089517e" in namespace "downward-api-695" to be "success or failure"
Nov 22 23:52:55.638: INFO: Pod "downwardapi-volume-b0bdd664-7efe-4f72-b00e-b0414089517e": Phase="Pending", Reason="", readiness=false. Elapsed: 18.074668ms
Nov 22 23:52:57.642: INFO: Pod "downwardapi-volume-b0bdd664-7efe-4f72-b00e-b0414089517e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02172255s
Nov 22 23:52:59.646: INFO: Pod "downwardapi-volume-b0bdd664-7efe-4f72-b00e-b0414089517e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026045222s
STEP: Saw pod success
Nov 22 23:52:59.646: INFO: Pod "downwardapi-volume-b0bdd664-7efe-4f72-b00e-b0414089517e" satisfied condition "success or failure"
Nov 22 23:52:59.649: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b0bdd664-7efe-4f72-b00e-b0414089517e container client-container: 
STEP: delete the pod
Nov 22 23:52:59.698: INFO: Waiting for pod downwardapi-volume-b0bdd664-7efe-4f72-b00e-b0414089517e to disappear
Nov 22 23:52:59.701: INFO: Pod downwardapi-volume-b0bdd664-7efe-4f72-b00e-b0414089517e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:52:59.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-695" for this suite.
Nov 22 23:53:05.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:53:05.792: INFO: namespace downward-api-695 deletion completed in 6.087573393s

• [SLOW TEST:10.253 seconds]
[sig-storage] Downward API volume
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:53:05.792: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:53:05.891: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fc729c23-426e-494c-bb2b-7ff493e267b8" in namespace "projected-3175" to be "success or failure"
Nov 22 23:53:05.896: INFO: Pod "downwardapi-volume-fc729c23-426e-494c-bb2b-7ff493e267b8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.582302ms
Nov 22 23:53:08.040: INFO: Pod "downwardapi-volume-fc729c23-426e-494c-bb2b-7ff493e267b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.149202072s
Nov 22 23:53:10.043: INFO: Pod "downwardapi-volume-fc729c23-426e-494c-bb2b-7ff493e267b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.152334186s
STEP: Saw pod success
Nov 22 23:53:10.043: INFO: Pod "downwardapi-volume-fc729c23-426e-494c-bb2b-7ff493e267b8" satisfied condition "success or failure"
Nov 22 23:53:10.046: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-fc729c23-426e-494c-bb2b-7ff493e267b8 container client-container: 
STEP: delete the pod
Nov 22 23:53:10.075: INFO: Waiting for pod downwardapi-volume-fc729c23-426e-494c-bb2b-7ff493e267b8 to disappear
Nov 22 23:53:10.094: INFO: Pod downwardapi-volume-fc729c23-426e-494c-bb2b-7ff493e267b8 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:53:10.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3175" for this suite.
Nov 22 23:53:16.119: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:53:16.210: INFO: namespace projected-3175 deletion completed in 6.112734357s

• [SLOW TEST:10.419 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:53:16.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:53:20.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5778" for this suite.
Nov 22 23:53:58.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:53:58.455: INFO: namespace kubelet-test-5778 deletion completed in 38.123187171s

• [SLOW TEST:42.244 seconds]
[k8s.io] Kubelet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:53:58.455: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Nov 22 23:53:58.559: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9152,SelfLink:/api/v1/namespaces/watch-9152/configmaps/e2e-watch-test-label-changed,UID:57f351fd-6330-4056-879c-e542b42378cb,ResourceVersion:10997020,Generation:0,CreationTimestamp:2020-11-22 23:53:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Nov 22 23:53:58.559: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9152,SelfLink:/api/v1/namespaces/watch-9152/configmaps/e2e-watch-test-label-changed,UID:57f351fd-6330-4056-879c-e542b42378cb,ResourceVersion:10997021,Generation:0,CreationTimestamp:2020-11-22 23:53:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Nov 22 23:53:58.560: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9152,SelfLink:/api/v1/namespaces/watch-9152/configmaps/e2e-watch-test-label-changed,UID:57f351fd-6330-4056-879c-e542b42378cb,ResourceVersion:10997022,Generation:0,CreationTimestamp:2020-11-22 23:53:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Nov 22 23:54:08.589: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9152,SelfLink:/api/v1/namespaces/watch-9152/configmaps/e2e-watch-test-label-changed,UID:57f351fd-6330-4056-879c-e542b42378cb,ResourceVersion:10997044,Generation:0,CreationTimestamp:2020-11-22 23:53:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Nov 22 23:54:08.589: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9152,SelfLink:/api/v1/namespaces/watch-9152/configmaps/e2e-watch-test-label-changed,UID:57f351fd-6330-4056-879c-e542b42378cb,ResourceVersion:10997045,Generation:0,CreationTimestamp:2020-11-22 23:53:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Nov 22 23:54:08.589: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-9152,SelfLink:/api/v1/namespaces/watch-9152/configmaps/e2e-watch-test-label-changed,UID:57f351fd-6330-4056-879c-e542b42378cb,ResourceVersion:10997046,Generation:0,CreationTimestamp:2020-11-22 23:53:58 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:54:08.589: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9152" for this suite.
Nov 22 23:54:14.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:54:14.693: INFO: namespace watch-9152 deletion completed in 6.100268481s

• [SLOW TEST:16.238 seconds]
[sig-api-machinery] Watchers
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:54:14.693: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 22 23:54:14.755: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3c97c6fd-d935-4dd5-bb8a-e67df878b58e" in namespace "projected-3358" to be "success or failure"
Nov 22 23:54:14.766: INFO: Pod "downwardapi-volume-3c97c6fd-d935-4dd5-bb8a-e67df878b58e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.087349ms
Nov 22 23:54:16.770: INFO: Pod "downwardapi-volume-3c97c6fd-d935-4dd5-bb8a-e67df878b58e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015154484s
Nov 22 23:54:18.774: INFO: Pod "downwardapi-volume-3c97c6fd-d935-4dd5-bb8a-e67df878b58e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019214737s
STEP: Saw pod success
Nov 22 23:54:18.774: INFO: Pod "downwardapi-volume-3c97c6fd-d935-4dd5-bb8a-e67df878b58e" satisfied condition "success or failure"
Nov 22 23:54:18.777: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-3c97c6fd-d935-4dd5-bb8a-e67df878b58e container client-container: 
STEP: delete the pod
Nov 22 23:54:18.812: INFO: Waiting for pod downwardapi-volume-3c97c6fd-d935-4dd5-bb8a-e67df878b58e to disappear
Nov 22 23:54:18.826: INFO: Pod downwardapi-volume-3c97c6fd-d935-4dd5-bb8a-e67df878b58e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:54:18.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3358" for this suite.
Nov 22 23:54:24.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:54:24.937: INFO: namespace projected-3358 deletion completed in 6.107531279s

• [SLOW TEST:10.244 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:54:24.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-b9d79838-7af3-4e16-af93-403222d939a9 in namespace container-probe-5173
Nov 22 23:54:29.039: INFO: Started pod test-webserver-b9d79838-7af3-4e16-af93-403222d939a9 in namespace container-probe-5173
STEP: checking the pod's current state and verifying that restartCount is present
Nov 22 23:54:29.041: INFO: Initial restart count of pod test-webserver-b9d79838-7af3-4e16-af93-403222d939a9 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 22 23:58:29.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5173" for this suite.
Nov 22 23:58:35.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 22 23:58:35.917: INFO: namespace container-probe-5173 deletion completed in 6.133118297s

• [SLOW TEST:250.980 seconds]
[k8s.io] Probing container
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 22 23:58:35.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1820
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Nov 22 23:58:36.010: INFO: Found 0 stateful pods, waiting for 3
Nov 22 23:58:46.014: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:58:46.014: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:58:46.014: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Nov 22 23:58:46.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1820 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Nov 22 23:58:48.779: INFO: stderr: "I1122 23:58:48.651548    3956 log.go:172] (0xc00010cf20) (0xc0003028c0) Create stream\nI1122 23:58:48.651574    3956 log.go:172] (0xc00010cf20) (0xc0003028c0) Stream added, broadcasting: 1\nI1122 23:58:48.653777    3956 log.go:172] (0xc00010cf20) Reply frame received for 1\nI1122 23:58:48.653819    3956 log.go:172] (0xc00010cf20) (0xc000934000) Create stream\nI1122 23:58:48.653833    3956 log.go:172] (0xc00010cf20) (0xc000934000) Stream added, broadcasting: 3\nI1122 23:58:48.654994    3956 log.go:172] (0xc00010cf20) Reply frame received for 3\nI1122 23:58:48.655074    3956 log.go:172] (0xc00010cf20) (0xc00097e000) Create stream\nI1122 23:58:48.655107    3956 log.go:172] (0xc00010cf20) (0xc00097e000) Stream added, broadcasting: 5\nI1122 23:58:48.656078    3956 log.go:172] (0xc00010cf20) Reply frame received for 5\nI1122 23:58:48.734452    3956 log.go:172] (0xc00010cf20) Data frame received for 5\nI1122 23:58:48.734481    3956 log.go:172] (0xc00097e000) (5) Data frame handling\nI1122 23:58:48.734505    3956 log.go:172] (0xc00097e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:58:48.765086    3956 log.go:172] (0xc00010cf20) Data frame received for 3\nI1122 23:58:48.765121    3956 log.go:172] (0xc000934000) (3) Data frame handling\nI1122 23:58:48.765150    3956 log.go:172] (0xc000934000) (3) Data frame sent\nI1122 23:58:48.765372    3956 log.go:172] (0xc00010cf20) Data frame received for 3\nI1122 23:58:48.765405    3956 log.go:172] (0xc000934000) (3) Data frame handling\nI1122 23:58:48.765481    3956 log.go:172] (0xc00010cf20) Data frame received for 5\nI1122 23:58:48.765507    3956 log.go:172] (0xc00097e000) (5) Data frame handling\nI1122 23:58:48.767502    3956 log.go:172] (0xc00010cf20) Data frame received for 1\nI1122 23:58:48.767540    3956 log.go:172] (0xc0003028c0) (1) Data frame handling\nI1122 23:58:48.767574    3956 log.go:172] (0xc0003028c0) (1) Data frame sent\nI1122 23:58:48.767609    3956 log.go:172] (0xc00010cf20) (0xc0003028c0) Stream removed, broadcasting: 1\nI1122 23:58:48.767735    3956 log.go:172] (0xc00010cf20) Go away received\nI1122 23:58:48.768153    3956 log.go:172] (0xc00010cf20) (0xc0003028c0) Stream removed, broadcasting: 1\nI1122 23:58:48.768176    3956 log.go:172] (0xc00010cf20) (0xc000934000) Stream removed, broadcasting: 3\nI1122 23:58:48.768182    3956 log.go:172] (0xc00010cf20) (0xc00097e000) Stream removed, broadcasting: 5\n"
Nov 22 23:58:48.779: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Nov 22 23:58:48.779: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Nov 22 23:58:58.810: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Nov 22 23:59:08.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1820 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Nov 22 23:59:09.050: INFO: stderr: "I1122 23:59:08.971184    3990 log.go:172] (0xc000904370) (0xc0007466e0) Create stream\nI1122 23:59:08.971259    3990 log.go:172] (0xc000904370) (0xc0007466e0) Stream added, broadcasting: 1\nI1122 23:59:08.975428    3990 log.go:172] (0xc000904370) Reply frame received for 1\nI1122 23:59:08.975463    3990 log.go:172] (0xc000904370) (0xc000746000) Create stream\nI1122 23:59:08.975472    3990 log.go:172] (0xc000904370) (0xc000746000) Stream added, broadcasting: 3\nI1122 23:59:08.976408    3990 log.go:172] (0xc000904370) Reply frame received for 3\nI1122 23:59:08.976435    3990 log.go:172] (0xc000904370) (0xc0005fc320) Create stream\nI1122 23:59:08.976444    3990 log.go:172] (0xc000904370) (0xc0005fc320) Stream added, broadcasting: 5\nI1122 23:59:08.977449    3990 log.go:172] (0xc000904370) Reply frame received for 5\nI1122 23:59:09.042664    3990 log.go:172] (0xc000904370) Data frame received for 3\nI1122 23:59:09.042685    3990 log.go:172] (0xc000746000) (3) Data frame handling\nI1122 23:59:09.042692    3990 log.go:172] (0xc000746000) (3) Data frame sent\nI1122 23:59:09.042696    3990 log.go:172] (0xc000904370) Data frame received for 3\nI1122 23:59:09.042700    3990 log.go:172] (0xc000746000) (3) Data frame handling\nI1122 23:59:09.042911    3990 log.go:172] (0xc000904370) Data frame received for 5\nI1122 23:59:09.042936    3990 log.go:172] (0xc0005fc320) (5) Data frame handling\nI1122 23:59:09.042953    3990 log.go:172] (0xc0005fc320) (5) Data frame sent\nI1122 23:59:09.042967    3990 log.go:172] (0xc000904370) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1122 23:59:09.042979    3990 log.go:172] (0xc0005fc320) (5) Data frame handling\nI1122 23:59:09.044538    3990 log.go:172] (0xc000904370) Data frame received for 1\nI1122 23:59:09.044566    3990 log.go:172] (0xc0007466e0) (1) Data frame handling\nI1122 23:59:09.044587    3990 log.go:172] (0xc0007466e0) (1) Data frame sent\nI1122 23:59:09.044607    3990 log.go:172] (0xc000904370) (0xc0007466e0) Stream removed, broadcasting: 1\nI1122 23:59:09.044630    3990 log.go:172] (0xc000904370) Go away received\nI1122 23:59:09.045060    3990 log.go:172] (0xc000904370) (0xc0007466e0) Stream removed, broadcasting: 1\nI1122 23:59:09.045085    3990 log.go:172] (0xc000904370) (0xc000746000) Stream removed, broadcasting: 3\nI1122 23:59:09.045106    3990 log.go:172] (0xc000904370) (0xc0005fc320) Stream removed, broadcasting: 5\n"
Nov 22 23:59:09.050: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Nov 22 23:59:09.050: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Nov 22 23:59:19.070: INFO: Waiting for StatefulSet statefulset-1820/ss2 to complete update
Nov 22 23:59:19.070: INFO: Waiting for Pod statefulset-1820/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Nov 22 23:59:19.070: INFO: Waiting for Pod statefulset-1820/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Nov 22 23:59:29.078: INFO: Waiting for StatefulSet statefulset-1820/ss2 to complete update
Nov 22 23:59:29.078: INFO: Waiting for Pod statefulset-1820/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Nov 22 23:59:39.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1820 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Nov 22 23:59:39.414: INFO: stderr: "I1122 23:59:39.264574    4010 log.go:172] (0xc000138dc0) (0xc0002a6780) Create stream\nI1122 23:59:39.264634    4010 log.go:172] (0xc000138dc0) (0xc0002a6780) Stream added, broadcasting: 1\nI1122 23:59:39.267918    4010 log.go:172] (0xc000138dc0) Reply frame received for 1\nI1122 23:59:39.267964    4010 log.go:172] (0xc000138dc0) (0xc0009aa000) Create stream\nI1122 23:59:39.267999    4010 log.go:172] (0xc000138dc0) (0xc0009aa000) Stream added, broadcasting: 3\nI1122 23:59:39.270653    4010 log.go:172] (0xc000138dc0) Reply frame received for 3\nI1122 23:59:39.270708    4010 log.go:172] (0xc000138dc0) (0xc000582000) Create stream\nI1122 23:59:39.270729    4010 log.go:172] (0xc000138dc0) (0xc000582000) Stream added, broadcasting: 5\nI1122 23:59:39.273172    4010 log.go:172] (0xc000138dc0) Reply frame received for 5\nI1122 23:59:39.378353    4010 log.go:172] (0xc000138dc0) Data frame received for 5\nI1122 23:59:39.378378    4010 log.go:172] (0xc000582000) (5) Data frame handling\nI1122 23:59:39.378394    4010 log.go:172] (0xc000582000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI1122 23:59:39.404491    4010 log.go:172] (0xc000138dc0) Data frame received for 3\nI1122 23:59:39.404661    4010 log.go:172] (0xc0009aa000) (3) Data frame handling\nI1122 23:59:39.404742    4010 log.go:172] (0xc0009aa000) (3) Data frame sent\nI1122 23:59:39.405046    4010 log.go:172] (0xc000138dc0) Data frame received for 3\nI1122 23:59:39.405069    4010 log.go:172] (0xc0009aa000) (3) Data frame handling\nI1122 23:59:39.405317    4010 log.go:172] (0xc000138dc0) Data frame received for 5\nI1122 23:59:39.405344    4010 log.go:172] (0xc000582000) (5) Data frame handling\nI1122 23:59:39.407539    4010 log.go:172] (0xc000138dc0) Data frame received for 1\nI1122 23:59:39.407563    4010 log.go:172] (0xc0002a6780) (1) Data frame handling\nI1122 23:59:39.407575    4010 log.go:172] (0xc0002a6780) (1) Data frame sent\nI1122 23:59:39.407592    4010 log.go:172] (0xc000138dc0) (0xc0002a6780) Stream removed, broadcasting: 1\nI1122 23:59:39.407614    4010 log.go:172] (0xc000138dc0) Go away received\nI1122 23:59:39.408134    4010 log.go:172] (0xc000138dc0) (0xc0002a6780) Stream removed, broadcasting: 1\nI1122 23:59:39.408173    4010 log.go:172] (0xc000138dc0) (0xc0009aa000) Stream removed, broadcasting: 3\nI1122 23:59:39.408194    4010 log.go:172] (0xc000138dc0) (0xc000582000) Stream removed, broadcasting: 5\n"
Nov 22 23:59:39.414: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Nov 22 23:59:39.414: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Nov 22 23:59:49.447: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Nov 22 23:59:59.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1820 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Nov 22 23:59:59.821: INFO: stderr: "I1122 23:59:59.727838    4030 log.go:172] (0xc0006f8370) (0xc000632820) Create stream\nI1122 23:59:59.727903    4030 log.go:172] (0xc0006f8370) (0xc000632820) Stream added, broadcasting: 1\nI1122 23:59:59.730108    4030 log.go:172] (0xc0006f8370) Reply frame received for 1\nI1122 23:59:59.730144    4030 log.go:172] (0xc0006f8370) (0xc0006328c0) Create stream\nI1122 23:59:59.730163    4030 log.go:172] (0xc0006f8370) (0xc0006328c0) Stream added, broadcasting: 3\nI1122 23:59:59.730950    4030 log.go:172] (0xc0006f8370) Reply frame received for 3\nI1122 23:59:59.730987    4030 log.go:172] (0xc0006f8370) (0xc0007e2000) Create stream\nI1122 23:59:59.731002    4030 log.go:172] (0xc0006f8370) (0xc0007e2000) Stream added, broadcasting: 5\nI1122 23:59:59.731833    4030 log.go:172] (0xc0006f8370) Reply frame received for 5\nI1122 23:59:59.812820    4030 log.go:172] (0xc0006f8370) Data frame received for 5\nI1122 23:59:59.812987    4030 log.go:172] (0xc0007e2000) (5) Data frame handling\nI1122 23:59:59.813044    4030 log.go:172] (0xc0007e2000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI1122 23:59:59.813077    4030 log.go:172] (0xc0006f8370) Data frame received for 3\nI1122 23:59:59.813110    4030 log.go:172] (0xc0006328c0) (3) Data frame handling\nI1122 23:59:59.813121    4030 log.go:172] (0xc0006328c0) (3) Data frame sent\nI1122 23:59:59.813130    4030 log.go:172] (0xc0006f8370) Data frame received for 3\nI1122 23:59:59.813137    4030 log.go:172] (0xc0006328c0) (3) Data frame handling\nI1122 23:59:59.813165    4030 log.go:172] (0xc0006f8370) Data frame received for 5\nI1122 23:59:59.813180    4030 log.go:172] (0xc0007e2000) (5) Data frame handling\nI1122 23:59:59.814504    4030 log.go:172] (0xc0006f8370) Data frame received for 1\nI1122 23:59:59.814535    4030 log.go:172] (0xc000632820) (1) Data frame handling\nI1122 23:59:59.814566    4030 log.go:172] (0xc000632820) (1) Data frame sent\nI1122 23:59:59.814628    4030 log.go:172] (0xc0006f8370) (0xc000632820) Stream removed, broadcasting: 1\nI1122 23:59:59.814837    4030 log.go:172] (0xc0006f8370) Go away received\nI1122 23:59:59.815034    4030 log.go:172] (0xc0006f8370) (0xc000632820) Stream removed, broadcasting: 1\nI1122 23:59:59.815054    4030 log.go:172] (0xc0006f8370) (0xc0006328c0) Stream removed, broadcasting: 3\nI1122 23:59:59.815099    4030 log.go:172] (0xc0006f8370) (0xc0007e2000) Stream removed, broadcasting: 5\n"
Nov 22 23:59:59.821: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Nov 22 23:59:59.821: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Nov 23 00:00:19.840: INFO: Waiting for StatefulSet statefulset-1820/ss2 to complete update
Nov 23 00:00:19.840: INFO: Waiting for Pod statefulset-1820/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Nov 23 00:00:29.847: INFO: Deleting all statefulset in ns statefulset-1820
Nov 23 00:00:29.850: INFO: Scaling statefulset ss2 to 0
Nov 23 00:00:39.882: INFO: Waiting for statefulset status.replicas updated to 0
Nov 23 00:00:39.885: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 23 00:00:39.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1820" for this suite.
Nov 23 00:00:45.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 23 00:00:45.994: INFO: namespace statefulset-1820 deletion completed in 6.091716369s

• [SLOW TEST:130.076 seconds]
[sig-apps] StatefulSet
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
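The StatefulSet test above updates the template image from nginx:1.14-alpine to nginx:1.15-alpine, watches the controller replace pods in reverse ordinal order (ss2-2, then ss2-1, then ss2-0), and then rolls the template back. A minimal sketch of the StatefulSet shape being driven — the labels and the headless-service wiring are assumptions for illustration:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
  namespace: statefulset-1820
spec:
  serviceName: test               # headless service created by the test setup
  replicas: 3
  selector:
    matchLabels:
      app: ss2                    # illustrative label
  updateStrategy:
    type: RollingUpdate           # pods are updated one ordinal at a time, highest first
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine   # changed to 1.15-alpine, then rolled back
```

Each template change produces a new controller revision (ss2-6c5cd755cd and ss2-7c9b54fd4c in the log), and the "Waiting for Pod … to have revision" lines show the controller converging every pod onto the target revision.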
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 23 00:00:45.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Nov 23 00:00:46.055: INFO: Waiting up to 5m0s for pod "pod-8ba369e0-2846-4044-944c-68ec648eddd0" in namespace "emptydir-5648" to be "success or failure"
Nov 23 00:00:46.060: INFO: Pod "pod-8ba369e0-2846-4044-944c-68ec648eddd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.581204ms
Nov 23 00:00:48.065: INFO: Pod "pod-8ba369e0-2846-4044-944c-68ec648eddd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009126506s
Nov 23 00:00:50.069: INFO: Pod "pod-8ba369e0-2846-4044-944c-68ec648eddd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013152188s
STEP: Saw pod success
Nov 23 00:00:50.069: INFO: Pod "pod-8ba369e0-2846-4044-944c-68ec648eddd0" satisfied condition "success or failure"
Nov 23 00:00:50.071: INFO: Trying to get logs from node iruya-worker2 pod pod-8ba369e0-2846-4044-944c-68ec648eddd0 container test-container: 
STEP: delete the pod
Nov 23 00:00:50.094: INFO: Waiting for pod pod-8ba369e0-2846-4044-944c-68ec648eddd0 to disappear
Nov 23 00:00:50.104: INFO: Pod pod-8ba369e0-2846-4044-944c-68ec648eddd0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 23 00:00:50.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5648" for this suite.
Nov 23 00:00:56.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 23 00:00:56.209: INFO: namespace emptydir-5648 deletion completed in 6.101542359s

• [SLOW TEST:10.214 seconds]
[sig-storage] EmptyDir volumes
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
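The emptyDir test above launches a pod that creates a file with mode 0777 as root on a default-medium (node disk) emptyDir volume and verifies the resulting permissions from the container's logs. A hedged sketch of the pod shape involved — the image and its flags are assumptions standing in for the e2e mount-test container:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-0777         # illustrative name
spec:
  restartPolicy: Never            # pod runs to completion ("Succeeded" in the log)
  containers:
  - name: test-container
    image: busybox:1.29           # assumed stand-in for the e2e mounttest image
    command: ["sh", "-c"]
    args:
    - touch /test-volume/f && chmod 0777 /test-volume/f && stat -c '%a' /test-volume/f
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium: backed by node storage
```

The framework's "success or failure" wait corresponds to the pod reaching phase Succeeded, after which the container log is fetched and checked.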
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Nov 23 00:00:56.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Nov 23 00:00:56.322: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a47eb4ea-4b32-453c-8a73-e59263ed7b8b" in namespace "projected-3894" to be "success or failure"
Nov 23 00:00:56.327: INFO: Pod "downwardapi-volume-a47eb4ea-4b32-453c-8a73-e59263ed7b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.059511ms
Nov 23 00:00:58.330: INFO: Pod "downwardapi-volume-a47eb4ea-4b32-453c-8a73-e59263ed7b8b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008773923s
Nov 23 00:01:00.334: INFO: Pod "downwardapi-volume-a47eb4ea-4b32-453c-8a73-e59263ed7b8b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012767464s
STEP: Saw pod success
Nov 23 00:01:00.335: INFO: Pod "downwardapi-volume-a47eb4ea-4b32-453c-8a73-e59263ed7b8b" satisfied condition "success or failure"
Nov 23 00:01:00.337: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a47eb4ea-4b32-453c-8a73-e59263ed7b8b container client-container: 
STEP: delete the pod
Nov 23 00:01:00.435: INFO: Waiting for pod downwardapi-volume-a47eb4ea-4b32-453c-8a73-e59263ed7b8b to disappear
Nov 23 00:01:00.440: INFO: Pod downwardapi-volume-a47eb4ea-4b32-453c-8a73-e59263ed7b8b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Nov 23 00:01:00.440: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3894" for this suite.
Nov 23 00:01:06.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Nov 23 00:01:06.532: INFO: namespace projected-3894 deletion completed in 6.087745296s

• [SLOW TEST:10.322 seconds]
[sig-storage] Projected downwardAPI
/workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /workspace/anago-v1.15.12-beta.0.35+d69e6d58f41274/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
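The projected downwardAPI test above mounts the pod's own name into a file via a projected volume and verifies the container can read it back. A minimal sketch of that wiring — the mount path, file name, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-pod    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29           # assumed stand-in for the e2e mounttest image
    command: ["cat", "/etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # the pod's own name, exposed as a file
```

The test asserts that the file's content equals the pod name, which is why only "podname" is projected here.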
SSSSSSSSSSSSSSSSSSS
Nov 23 00:01:06.532: INFO: Running AfterSuite actions on all nodes
Nov 23 00:01:06.532: INFO: Running AfterSuite actions on node 1
Nov 23 00:01:06.532: INFO: Skipping dumping logs from cluster

Ran 215 of 4413 Specs in 6224.405 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4198 Skipped
PASS