I0212 12:56:07.009209 8 e2e.go:243] Starting e2e run "f0b01e30-7752-4010-bc41-0bee554ca11a" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581512165 - Will randomize all specs
Will run 215 of 4412 specs

Feb 12 12:56:07.298: INFO: >>> kubeConfig: /root/.kube/config
Feb 12 12:56:07.301: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 12 12:56:07.323: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 12 12:56:07.348: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 12 12:56:07.348: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 12 12:56:07.348: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 12 12:56:07.359: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 12 12:56:07.359: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 12 12:56:07.359: INFO: e2e test version: v1.15.7
Feb 12 12:56:07.361: INFO: kube-apiserver version: v1.15.1
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:56:07.361: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
Feb 12 12:56:07.528: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:56:19.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2937" for this suite.
Feb 12 12:56:25.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:56:25.970: INFO: namespace kubelet-test-2937 deletion completed in 6.180795928s

• [SLOW TEST:18.609 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:56:25.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Feb 12 12:56:26.108: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 19.357002ms)
Feb 12 12:56:26.113: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.04235ms)
Feb 12 12:56:26.118: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 4.682522ms)
Feb 12 12:56:26.121: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.332704ms)
Feb 12 12:56:26.124: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.107038ms)
Feb 12 12:56:26.128: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.08284ms)
Feb 12 12:56:26.135: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 7.535552ms)
Feb 12 12:56:26.232: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 96.481625ms)
Feb 12 12:56:26.237: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.745069ms)
Feb 12 12:56:26.244: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 6.852896ms)
Feb 12 12:56:26.250: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.697382ms)
Feb 12 12:56:26.256: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.905911ms)
Feb 12 12:56:26.262: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 6.364651ms)
Feb 12 12:56:26.268: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.291123ms)
Feb 12 12:56:26.273: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 5.110978ms)
Feb 12 12:56:26.276: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.355827ms)
Feb 12 12:56:26.280: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.437865ms)
Feb 12 12:56:26.283: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.092157ms)
Feb 12 12:56:26.286: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.097645ms)
Feb 12 12:56:26.289: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/:alternatives.log alternatives.l... (200; 3.028028ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:56:26.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4561" for this suite.
Feb 12 12:56:32.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:56:32.464: INFO: namespace proxy-4561 deletion completed in 6.171559031s

• [SLOW TEST:6.494 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:56:32.465: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Feb 12 12:56:32.608: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f" in namespace "downward-api-2638" to be "success or failure"
Feb 12 12:56:32.619: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.501246ms
Feb 12 12:56:34.630: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022197113s
Feb 12 12:56:36.647: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038459326s
Feb 12 12:56:38.654: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045425959s
Feb 12 12:56:40.671: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062594525s
Feb 12 12:56:42.701: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 10.093144109s
Feb 12 12:56:44.711: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.102907908s
Feb 12 12:56:46.730: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 14.122269633s
STEP: Saw pod success
Feb 12 12:56:46.731: INFO: Pod "downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f" satisfied condition "success or failure"
Feb 12 12:56:46.738: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f container client-container:
STEP: delete the pod
Feb 12 12:56:47.008: INFO: Waiting for pod downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f to disappear
Feb 12 12:56:47.019: INFO: Pod downwardapi-volume-1f08911b-9ba3-49fe-a0e2-fb60f3c4019f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:56:47.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2638" for this suite.
Feb 12 12:56:53.048: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:56:53.169: INFO: namespace downward-api-2638 deletion completed in 6.141681664s

• [SLOW TEST:20.703 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:56:53.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-bae0ce5b-93bd-403e-a92a-09ef4dcd38e0
STEP: Creating a pod to test consume configMaps
Feb 12 12:56:53.323: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50" in namespace "projected-1947" to be "success or failure"
Feb 12 12:56:53.377: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 54.189089ms
Feb 12 12:56:55.387: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064097578s
Feb 12 12:56:57.402: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079065692s
Feb 12 12:56:59.414: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090711881s
Feb 12 12:57:01.425: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Pending", Reason="", readiness=false.
Elapsed: 8.10250725s
Feb 12 12:57:03.441: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.118345593s
STEP: Saw pod success
Feb 12 12:57:03.441: INFO: Pod "pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50" satisfied condition "success or failure"
Feb 12 12:57:03.451: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50 container projected-configmap-volume-test:
STEP: delete the pod
Feb 12 12:57:03.515: INFO: Waiting for pod pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50 to disappear
Feb 12 12:57:03.570: INFO: Pod pod-projected-configmaps-30ada4c0-eadb-4965-a891-4af03f80dd50 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:57:03.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1947" for this suite.
Feb 12 12:57:09.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:09.902: INFO: namespace projected-1947 deletion completed in 6.320821694s

• [SLOW TEST:16.733 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:57:09.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-6e76ab71-6e73-4890-af83-b1a4f516851d
STEP: Creating a pod to test consume configMaps
Feb 12 12:57:10.182: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d" in namespace "projected-3709" to be "success or failure"
Feb 12 12:57:10.187: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 5.400259ms
Feb 12 12:57:12.602: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.42020374s
Feb 12 12:57:14.616: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.434690655s
Feb 12 12:57:16.630: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false.
Elapsed: 6.448647717s
Feb 12 12:57:18.647: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.46490558s
Feb 12 12:57:20.665: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.482834569s
STEP: Saw pod success
Feb 12 12:57:20.665: INFO: Pod "pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d" satisfied condition "success or failure"
Feb 12 12:57:20.671: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d container projected-configmap-volume-test:
STEP: delete the pod
Feb 12 12:57:20.770: INFO: Waiting for pod pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d to disappear
Feb 12 12:57:20.776: INFO: Pod pod-projected-configmaps-64ae15f7-bcba-4875-b96f-0645a604369d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:57:20.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3709" for this suite.
Feb 12 12:57:26.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:27.048: INFO: namespace projected-3709 deletion completed in 6.264966202s

• [SLOW TEST:17.146 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:57:27.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:57:32.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-5667" for this suite.
Feb 12 12:57:38.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:57:38.953: INFO: namespace watch-5667 deletion completed in 6.235972341s

• [SLOW TEST:11.904 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-network] Service endpoints latency should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:57:38.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3727
I0212 12:57:39.394407 8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3727, replica count: 1
I0212 12:57:40.445122 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:41.445531 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:42.445911 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:43.446646 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:44.447103 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:45.447572 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:46.448110 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:47.448794 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:48.449390 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:49.449983 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0212 12:57:50.450458 8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 12 12:57:50.614: INFO: Created: latency-svc-cfp82
Feb 12 12:57:50.627: INFO: Got endpoints: latency-svc-cfp82
[76.422559ms] Feb 12 12:57:50.724: INFO: Created: latency-svc-jwlbl Feb 12 12:57:50.736: INFO: Got endpoints: latency-svc-jwlbl [107.254242ms] Feb 12 12:57:50.800: INFO: Created: latency-svc-v4lhh Feb 12 12:57:50.816: INFO: Got endpoints: latency-svc-v4lhh [186.917046ms] Feb 12 12:57:50.939: INFO: Created: latency-svc-9jhcf Feb 12 12:57:50.970: INFO: Got endpoints: latency-svc-9jhcf [342.482686ms] Feb 12 12:57:51.009: INFO: Created: latency-svc-ljgjj Feb 12 12:57:51.085: INFO: Got endpoints: latency-svc-ljgjj [456.264209ms] Feb 12 12:57:51.124: INFO: Created: latency-svc-5dnf7 Feb 12 12:57:51.143: INFO: Got endpoints: latency-svc-5dnf7 [514.344421ms] Feb 12 12:57:51.263: INFO: Created: latency-svc-x4gm7 Feb 12 12:57:51.277: INFO: Got endpoints: latency-svc-x4gm7 [647.638257ms] Feb 12 12:57:51.344: INFO: Created: latency-svc-q8rvh Feb 12 12:57:51.356: INFO: Got endpoints: latency-svc-q8rvh [727.133066ms] Feb 12 12:57:51.443: INFO: Created: latency-svc-zgpm5 Feb 12 12:57:51.449: INFO: Got endpoints: latency-svc-zgpm5 [820.048702ms] Feb 12 12:57:51.501: INFO: Created: latency-svc-mprkd Feb 12 12:57:51.591: INFO: Got endpoints: latency-svc-mprkd [961.460171ms] Feb 12 12:57:51.633: INFO: Created: latency-svc-v26nl Feb 12 12:57:51.649: INFO: Got endpoints: latency-svc-v26nl [1.019338719s] Feb 12 12:57:51.743: INFO: Created: latency-svc-wzhv7 Feb 12 12:57:51.768: INFO: Got endpoints: latency-svc-wzhv7 [1.139051934s] Feb 12 12:57:51.801: INFO: Created: latency-svc-2bhpz Feb 12 12:57:51.824: INFO: Got endpoints: latency-svc-2bhpz [1.195394752s] Feb 12 12:57:51.904: INFO: Created: latency-svc-4s7mz Feb 12 12:57:51.946: INFO: Got endpoints: latency-svc-4s7mz [1.316993419s] Feb 12 12:57:51.948: INFO: Created: latency-svc-tfpvr Feb 12 12:57:51.957: INFO: Got endpoints: latency-svc-tfpvr [1.328001728s] Feb 12 12:57:52.081: INFO: Created: latency-svc-fkmcp Feb 12 12:57:52.093: INFO: Got endpoints: latency-svc-fkmcp [1.464380844s] Feb 12 12:57:52.178: INFO: Created: latency-svc-jkh9f Feb 12 12:57:52.248: INFO: Got endpoints: latency-svc-jkh9f [1.511306473s] Feb 12 12:57:52.284: INFO: Created: latency-svc-t6vq9 Feb 12 12:57:52.328: INFO: Got endpoints: latency-svc-t6vq9 [1.512350444s] Feb 12 12:57:52.437: INFO: Created: latency-svc-vg5ct Feb 12 12:57:52.452: INFO: Got endpoints: latency-svc-vg5ct [1.481611326s] Feb 12 12:57:52.581: INFO: Created: latency-svc-dndt7 Feb 12 12:57:52.583: INFO: Got endpoints: latency-svc-dndt7 [1.496938012s] Feb 12 12:57:52.633: INFO: Created: latency-svc-k8576 Feb 12 12:57:52.638: INFO: Got endpoints: latency-svc-k8576 [1.494410083s] Feb 12 12:57:52.768: INFO: Created: latency-svc-6nkmq Feb 12 12:57:52.845: INFO: Got endpoints: latency-svc-6nkmq [1.568048194s] Feb 12 12:57:52.864: INFO: Created: latency-svc-hsflh Feb 12 12:57:52.946: INFO: Got endpoints: latency-svc-hsflh [1.589368977s] Feb 12 12:57:53.002: INFO: Created: latency-svc-vgmxd Feb 12 12:57:53.546: INFO: Got endpoints: latency-svc-vgmxd [2.096629299s] Feb 12 12:57:53.556: INFO: Created: latency-svc-bqf7g Feb 12 12:57:53.797: INFO: Got endpoints: latency-svc-bqf7g [2.205813394s] Feb 12 12:57:53.850: INFO: Created: latency-svc-spqbx Feb 12 12:57:53.868: INFO: Got endpoints: latency-svc-spqbx [2.218839409s] Feb 12 12:57:53.996: INFO: Created: latency-svc-b2tvp Feb 12 12:57:53.996: INFO: Got endpoints: latency-svc-b2tvp [2.227688095s] Feb 12 12:57:54.061: INFO: Created: latency-svc-jbn5f Feb 12 12:57:54.103: INFO: Got endpoints: latency-svc-jbn5f [2.278488149s] Feb 12 12:57:54.135: INFO: Created: latency-svc-mhthb Feb 
12 12:57:54.149: INFO: Got endpoints: latency-svc-mhthb [2.202841052s] Feb 12 12:57:54.189: INFO: Created: latency-svc-zlqbd Feb 12 12:57:54.245: INFO: Got endpoints: latency-svc-zlqbd [2.288423433s] Feb 12 12:57:54.298: INFO: Created: latency-svc-7kfbz Feb 12 12:57:54.316: INFO: Got endpoints: latency-svc-7kfbz [2.22240629s] Feb 12 12:57:54.428: INFO: Created: latency-svc-m9tk2 Feb 12 12:57:54.461: INFO: Got endpoints: latency-svc-m9tk2 [2.212579202s] Feb 12 12:57:54.522: INFO: Created: latency-svc-65djb Feb 12 12:57:54.617: INFO: Got endpoints: latency-svc-65djb [2.288153156s] Feb 12 12:57:54.620: INFO: Created: latency-svc-87ztl Feb 12 12:57:54.634: INFO: Got endpoints: latency-svc-87ztl [2.181247315s] Feb 12 12:57:54.696: INFO: Created: latency-svc-t59f2 Feb 12 12:57:54.772: INFO: Got endpoints: latency-svc-t59f2 [2.18919345s] Feb 12 12:57:54.835: INFO: Created: latency-svc-j2b9c Feb 12 12:57:54.848: INFO: Got endpoints: latency-svc-j2b9c [2.209731612s] Feb 12 12:57:55.007: INFO: Created: latency-svc-7xfgf Feb 12 12:57:55.016: INFO: Got endpoints: latency-svc-7xfgf [2.171001984s] Feb 12 12:57:55.073: INFO: Created: latency-svc-58ggp Feb 12 12:57:55.176: INFO: Got endpoints: latency-svc-58ggp [2.230116366s] Feb 12 12:57:55.239: INFO: Created: latency-svc-g9w74 Feb 12 12:57:55.242: INFO: Got endpoints: latency-svc-g9w74 [1.695359518s] Feb 12 12:57:55.396: INFO: Created: latency-svc-tfk95 Feb 12 12:57:55.440: INFO: Got endpoints: latency-svc-tfk95 [1.64260383s] Feb 12 12:57:55.652: INFO: Created: latency-svc-fm659 Feb 12 12:57:55.661: INFO: Got endpoints: latency-svc-fm659 [1.792774288s] Feb 12 12:57:55.727: INFO: Created: latency-svc-k6wj4 Feb 12 12:57:55.853: INFO: Got endpoints: latency-svc-k6wj4 [1.856335496s] Feb 12 12:57:55.885: INFO: Created: latency-svc-42p9m Feb 12 12:57:55.893: INFO: Got endpoints: latency-svc-42p9m [1.789365472s] Feb 12 12:57:56.067: INFO: Created: latency-svc-ltl9j Feb 12 12:57:56.114: INFO: Got endpoints: latency-svc-ltl9j [1.964521967s] Feb 12 12:57:56.248: INFO: Created: latency-svc-pb8rk Feb 12 12:57:56.267: INFO: Got endpoints: latency-svc-pb8rk [2.021521599s] Feb 12 12:57:56.328: INFO: Created: latency-svc-8xjts Feb 12 12:57:56.330: INFO: Got endpoints: latency-svc-8xjts [2.013710867s] Feb 12 12:57:56.581: INFO: Created: latency-svc-fvjsv Feb 12 12:57:56.607: INFO: Got endpoints: latency-svc-fvjsv [2.145720761s] Feb 12 12:57:56.777: INFO: Created: latency-svc-xtxdk Feb 12 12:57:56.805: INFO: Got endpoints: latency-svc-xtxdk [2.188079618s] Feb 12 12:57:56.952: INFO: Created: latency-svc-p7vmk Feb 12 12:57:56.964: INFO: Got endpoints: latency-svc-p7vmk [2.329915943s] Feb 12 12:57:57.019: INFO: Created: latency-svc-ng6l7 Feb 12 12:57:57.026: INFO: Got endpoints: latency-svc-ng6l7 [2.254077547s] Feb 12 12:57:57.265: INFO: Created: latency-svc-q7md6 Feb 12 12:57:57.277: INFO: Got endpoints: latency-svc-q7md6 [2.429018962s] Feb 12 12:57:57.471: INFO: Created: latency-svc-xx76k Feb 12 12:57:57.479: INFO: Got endpoints: latency-svc-xx76k [2.462815707s] Feb 12 12:57:57.733: INFO: Created: latency-svc-zbd8w Feb 12 12:57:57.905: INFO: Created: latency-svc-hjdvn Feb 12 12:57:57.905: INFO: Got endpoints: latency-svc-zbd8w [2.728484083s] Feb 12 12:57:57.936: INFO: Got endpoints: latency-svc-hjdvn [2.694200612s] Feb 12 12:57:58.185: INFO: Created: latency-svc-hm84x Feb 12 12:57:58.280: INFO: Got endpoints: latency-svc-hm84x [2.839974322s] Feb 12 12:57:58.299: INFO: Created: latency-svc-pw8c4 Feb 12 12:57:58.428: INFO: Got endpoints: latency-svc-pw8c4 [2.767148796s] Feb 
12 12:57:58.522: INFO: Created: latency-svc-8n6jz Feb 12 12:57:58.595: INFO: Got endpoints: latency-svc-8n6jz [2.741495008s] Feb 12 12:57:58.666: INFO: Created: latency-svc-kl89g Feb 12 12:57:58.668: INFO: Got endpoints: latency-svc-kl89g [2.775155909s] Feb 12 12:57:58.778: INFO: Created: latency-svc-ql79p Feb 12 12:57:58.790: INFO: Got endpoints: latency-svc-ql79p [2.675390516s] Feb 12 12:57:58.839: INFO: Created: latency-svc-2q6cj Feb 12 12:57:58.869: INFO: Got endpoints: latency-svc-2q6cj [2.601577063s] Feb 12 12:57:59.036: INFO: Created: latency-svc-xn7gq Feb 12 12:57:59.071: INFO: Got endpoints: latency-svc-xn7gq [2.740924015s] Feb 12 12:57:59.409: INFO: Created: latency-svc-gpwdh Feb 12 12:57:59.421: INFO: Got endpoints: latency-svc-gpwdh [2.813629219s] Feb 12 12:58:00.064: INFO: Created: latency-svc-cv5fw Feb 12 12:58:00.080: INFO: Got endpoints: latency-svc-cv5fw [3.274944986s] Feb 12 12:58:00.319: INFO: Created: latency-svc-5jzjq Feb 12 12:58:00.319: INFO: Got endpoints: latency-svc-5jzjq [3.355460069s] Feb 12 12:58:00.387: INFO: Created: latency-svc-gl59t Feb 12 12:58:00.457: INFO: Got endpoints: latency-svc-gl59t [3.430396006s] Feb 12 12:58:00.523: INFO: Created: latency-svc-tx26g Feb 12 12:58:00.533: INFO: Got endpoints: latency-svc-tx26g [3.255516931s] Feb 12 12:58:00.676: INFO: Created: latency-svc-glpvw Feb 12 12:58:00.685: INFO: Got endpoints: latency-svc-glpvw [3.204965262s] Feb 12 12:58:00.810: INFO: Created: latency-svc-9s9hw Feb 12 12:58:00.833: INFO: Got endpoints: latency-svc-9s9hw [299.530133ms] Feb 12 12:58:00.887: INFO: Created: latency-svc-dnx4c Feb 12 12:58:00.903: INFO: Got endpoints: latency-svc-dnx4c [2.99777492s] Feb 12 12:58:01.012: INFO: Created: latency-svc-nvn5b Feb 12 12:58:01.024: INFO: Got endpoints: latency-svc-nvn5b [3.087634318s] Feb 12 12:58:01.130: INFO: Created: latency-svc-fcrwv Feb 12 12:58:01.199: INFO: Got endpoints: latency-svc-fcrwv [2.917675389s] Feb 12 12:58:01.269: INFO: Created: latency-svc-6q4bc Feb 12 12:58:01.283: INFO: Got endpoints: latency-svc-6q4bc [2.854930006s] Feb 12 12:58:01.346: INFO: Created: latency-svc-v4kgb Feb 12 12:58:01.360: INFO: Got endpoints: latency-svc-v4kgb [2.764499644s] Feb 12 12:58:01.453: INFO: Created: latency-svc-bxtlf Feb 12 12:58:01.460: INFO: Got endpoints: latency-svc-bxtlf [2.791959869s] Feb 12 12:58:01.507: INFO: Created: latency-svc-k2q6z Feb 12 12:58:01.615: INFO: Created: latency-svc-mz9k2 Feb 12 12:58:01.623: INFO: Got endpoints: latency-svc-k2q6z [2.833411381s] Feb 12 12:58:01.629: INFO: Got endpoints: latency-svc-mz9k2 [2.759296255s] Feb 12 12:58:01.713: INFO: Created: latency-svc-nmszn Feb 12 12:58:01.797: INFO: Got endpoints: latency-svc-nmszn [2.72569022s] Feb 12 12:58:01.858: INFO: Created: latency-svc-6qrm2 Feb 12 12:58:01.880: INFO: Got endpoints: latency-svc-6qrm2 [2.458917309s] Feb 12 12:58:01.966: INFO: Created: latency-svc-zmw78 Feb 12 12:58:01.977: INFO: Got endpoints: latency-svc-zmw78 [1.895992348s] Feb 12 12:58:02.051: INFO: Created: latency-svc-mjd9j Feb 12 12:58:02.078: INFO: Got endpoints: latency-svc-mjd9j [1.75843177s] Feb 12 12:58:02.252: INFO: Created: latency-svc-kr86x Feb 12 12:58:02.268: INFO: Got endpoints: latency-svc-kr86x [1.81108922s] Feb 12 12:58:02.386: INFO: Created: latency-svc-9gwls Feb 12 12:58:02.408: INFO: Got endpoints: latency-svc-9gwls [1.722929015s] Feb 12 12:58:02.606: INFO: Created: latency-svc-lqdbx Feb 12 12:58:02.632: INFO: Got endpoints: latency-svc-lqdbx [1.798385235s] Feb 12 12:58:02.664: INFO: Created: latency-svc-t8bg7 Feb 12 12:58:02.687: 
INFO: Got endpoints: latency-svc-t8bg7 [1.78294941s] Feb 12 12:58:02.808: INFO: Created: latency-svc-h75rv Feb 12 12:58:02.880: INFO: Got endpoints: latency-svc-h75rv [1.855740152s] Feb 12 12:58:02.884: INFO: Created: latency-svc-lwdhv Feb 12 12:58:03.077: INFO: Got endpoints: latency-svc-lwdhv [1.877426937s] Feb 12 12:58:03.105: INFO: Created: latency-svc-286dx Feb 12 12:58:03.126: INFO: Got endpoints: latency-svc-286dx [1.842613153s] Feb 12 12:58:03.302: INFO: Created: latency-svc-dbnhp Feb 12 12:58:03.312: INFO: Got endpoints: latency-svc-dbnhp [1.951738718s] Feb 12 12:58:03.393: INFO: Created: latency-svc-s68tr Feb 12 12:58:04.008: INFO: Got endpoints: latency-svc-s68tr [2.547024511s] Feb 12 12:58:04.046: INFO: Created: latency-svc-drmjw Feb 12 12:58:04.102: INFO: Got endpoints: latency-svc-drmjw [2.478169095s] Feb 12 12:58:04.322: INFO: Created: latency-svc-fxrqp Feb 12 12:58:04.357: INFO: Got endpoints: latency-svc-fxrqp [2.727601173s] Feb 12 12:58:04.534: INFO: Created: latency-svc-v2ldp Feb 12 12:58:04.562: INFO: Got endpoints: latency-svc-v2ldp [2.764621512s] Feb 12 12:58:04.636: INFO: Created: latency-svc-7b7kl Feb 12 12:58:04.762: INFO: Got endpoints: latency-svc-7b7kl [2.881470018s] Feb 12 12:58:04.869: INFO: Created: latency-svc-8lvkt Feb 12 12:58:04.999: INFO: Got endpoints: latency-svc-8lvkt [3.022221588s] Feb 12 12:58:05.058: INFO: Created: latency-svc-lsw2c Feb 12 12:58:05.073: INFO: Got endpoints: latency-svc-lsw2c [2.995040707s] Feb 12 12:58:05.294: INFO: Created: latency-svc-wpzzd Feb 12 12:58:05.309: INFO: Got endpoints: latency-svc-wpzzd [3.040344296s] Feb 12 12:58:05.597: INFO: Created: latency-svc-pdw96 Feb 12 12:58:05.619: INFO: Got endpoints: latency-svc-pdw96 [3.210685386s] Feb 12 12:58:05.687: INFO: Created: latency-svc-jq5qb Feb 12 12:58:05.792: INFO: Got endpoints: latency-svc-jq5qb [3.159066628s] Feb 12 12:58:05.987: INFO: Created: latency-svc-mfq6q Feb 12 12:58:06.008: INFO: Got endpoints: latency-svc-mfq6q [3.321554223s] Feb 12 12:58:06.254: INFO: Created: latency-svc-qwwbz Feb 12 12:58:06.282: INFO: Got endpoints: latency-svc-qwwbz [3.401337344s] Feb 12 12:58:06.356: INFO: Created: latency-svc-ndq4r Feb 12 12:58:06.534: INFO: Got endpoints: latency-svc-ndq4r [3.457271152s] Feb 12 12:58:06.570: INFO: Created: latency-svc-htbg6 Feb 12 12:58:06.592: INFO: Got endpoints: latency-svc-htbg6 [3.465044051s] Feb 12 12:58:06.811: INFO: Created: latency-svc-dqwhx Feb 12 12:58:06.833: INFO: Got endpoints: latency-svc-dqwhx [3.520742862s] Feb 12 12:58:06.986: INFO: Created: latency-svc-rpjc5 Feb 12 12:58:06.995: INFO: Got endpoints: latency-svc-rpjc5 [2.986766139s] Feb 12 12:58:07.216: INFO: Created: latency-svc-vntjb Feb 12 12:58:07.238: INFO: Got endpoints: latency-svc-vntjb [3.135504663s] Feb 12 12:58:07.307: INFO: Created: latency-svc-2kjds Feb 12 12:58:07.431: INFO: Got endpoints: latency-svc-2kjds [3.073964422s] Feb 12 12:58:07.461: INFO: Created: latency-svc-72zh7 Feb 12 12:58:07.474: INFO: Got endpoints: latency-svc-72zh7 [2.911663547s] Feb 12 12:58:07.724: INFO: Created: latency-svc-6pbbc Feb 12 12:58:07.766: INFO: Got endpoints: latency-svc-6pbbc [3.002953188s] Feb 12 12:58:07.802: INFO: Created: latency-svc-hrzw5 Feb 12 12:58:07.813: INFO: Got endpoints: latency-svc-hrzw5 [2.814291259s] Feb 12 12:58:07.998: INFO: Created: latency-svc-n4x26 Feb 12 12:58:08.007: INFO: Got endpoints: latency-svc-n4x26 [2.934031002s] Feb 12 12:58:08.220: INFO: Created: latency-svc-7b2qv Feb 12 12:58:08.301: INFO: Got endpoints: latency-svc-7b2qv [2.992088861s] Feb 12 
12:58:08.306: INFO: Created: latency-svc-v7cc7 Feb 12 12:58:08.395: INFO: Got endpoints: latency-svc-v7cc7 [2.776641248s] Feb 12 12:58:08.422: INFO: Created: latency-svc-ls88b Feb 12 12:58:08.439: INFO: Got endpoints: latency-svc-ls88b [2.646892441s] Feb 12 12:58:08.583: INFO: Created: latency-svc-4dqj9 Feb 12 12:58:08.595: INFO: Got endpoints: latency-svc-4dqj9 [2.586300733s] Feb 12 12:58:08.661: INFO: Created: latency-svc-wktcp Feb 12 12:58:08.972: INFO: Got endpoints: latency-svc-wktcp [2.689579942s] Feb 12 12:58:08.978: INFO: Created: latency-svc-shqq4 Feb 12 12:58:08.996: INFO: Got endpoints: latency-svc-shqq4 [2.461622883s] Feb 12 12:58:09.297: INFO: Created: latency-svc-r7v8l Feb 12 12:58:09.312: INFO: Got endpoints: latency-svc-r7v8l [2.719961028s] Feb 12 12:58:09.543: INFO: Created: latency-svc-b4pzh Feb 12 12:58:09.562: INFO: Got endpoints: latency-svc-b4pzh [2.728954234s] Feb 12 12:58:09.642: INFO: Created: latency-svc-hwjjp Feb 12 12:58:09.730: INFO: Got endpoints: latency-svc-hwjjp [2.735001641s] Feb 12 12:58:09.786: INFO: Created: latency-svc-tpcbs Feb 12 12:58:09.797: INFO: Got endpoints: latency-svc-tpcbs [2.558829411s] Feb 12 12:58:10.028: INFO: Created: latency-svc-7bfxw Feb 12 12:58:10.035: INFO: Got endpoints: latency-svc-7bfxw [2.603922275s] Feb 12 12:58:10.111: INFO: Created: latency-svc-xxc5q Feb 12 12:58:10.121: INFO: Got endpoints: latency-svc-xxc5q [2.647174251s] Feb 12 12:58:10.292: INFO: Created: latency-svc-5jgh5 Feb 12 12:58:10.300: INFO: Got endpoints: latency-svc-5jgh5 [2.533548097s] Feb 12 12:58:10.493: INFO: Created: latency-svc-pl999 Feb 12 12:58:10.521: INFO: Got endpoints: latency-svc-pl999 [2.707389997s] Feb 12 12:58:10.722: INFO: Created: latency-svc-fpxtk Feb 12 12:58:10.776: INFO: Got endpoints: latency-svc-fpxtk [2.768109515s] Feb 12 12:58:11.124: INFO: Created: latency-svc-gkxkh Feb 12 12:58:11.345: INFO: Got endpoints: latency-svc-gkxkh [3.042870723s] Feb 12 12:58:11.348: INFO: Created: latency-svc-8s4j6 Feb 12 12:58:11.355: INFO: Got endpoints: latency-svc-8s4j6 [2.958836261s] Feb 12 12:58:11.672: INFO: Created: latency-svc-5b8fl Feb 12 12:58:11.677: INFO: Got endpoints: latency-svc-5b8fl [3.237931192s] Feb 12 12:58:11.917: INFO: Created: latency-svc-mr6q8 Feb 12 12:58:11.931: INFO: Got endpoints: latency-svc-mr6q8 [3.335301705s] Feb 12 12:58:12.256: INFO: Created: latency-svc-f6zxt Feb 12 12:58:12.265: INFO: Got endpoints: latency-svc-f6zxt [3.292509783s] Feb 12 12:58:12.540: INFO: Created: latency-svc-l5fw8 Feb 12 12:58:12.540: INFO: Got endpoints: latency-svc-l5fw8 [3.543795418s] Feb 12 12:58:12.813: INFO: Created: latency-svc-rzhxh Feb 12 12:58:12.861: INFO: Got endpoints: latency-svc-rzhxh [3.549029261s] Feb 12 12:58:13.037: INFO: Created: latency-svc-gmkn4 Feb 12 12:58:13.043: INFO: Got endpoints: latency-svc-gmkn4 [3.480604372s] Feb 12 12:58:13.261: INFO: Created: latency-svc-t67zb Feb 12 12:58:13.518: INFO: Got endpoints: latency-svc-t67zb [3.787465086s] Feb 12 12:58:13.531: INFO: Created: latency-svc-j7hhk Feb 12 12:58:13.533: INFO: Got endpoints: latency-svc-j7hhk [3.736563896s] Feb 12 12:58:13.594: INFO: Created: latency-svc-nbrdl Feb 12 12:58:13.598: INFO: Got endpoints: latency-svc-nbrdl [3.562585304s] Feb 12 12:58:13.757: INFO: Created: latency-svc-2bs7x Feb 12 12:58:13.770: INFO: Got endpoints: latency-svc-2bs7x [3.64889599s] Feb 12 12:58:13.982: INFO: Created: latency-svc-zqkdx Feb 12 12:58:13.990: INFO: Got endpoints: latency-svc-zqkdx [3.689726697s] Feb 12 12:58:14.180: INFO: Created: latency-svc-qlcmr Feb 12 12:58:14.187: 
INFO: Got endpoints: latency-svc-qlcmr [3.665374407s] Feb 12 12:58:14.377: INFO: Created: latency-svc-ff679 Feb 12 12:58:14.399: INFO: Got endpoints: latency-svc-ff679 [3.621893738s] Feb 12 12:58:14.462: INFO: Created: latency-svc-47xdc Feb 12 12:58:14.464: INFO: Got endpoints: latency-svc-47xdc [3.119036029s] Feb 12 12:58:14.599: INFO: Created: latency-svc-vhplz Feb 12 12:58:14.745: INFO: Got endpoints: latency-svc-vhplz [3.390387449s] Feb 12 12:58:14.749: INFO: Created: latency-svc-6rl48 Feb 12 12:58:14.758: INFO: Got endpoints: latency-svc-6rl48 [3.081159223s] Feb 12 12:58:14.913: INFO: Created: latency-svc-fqv68 Feb 12 12:58:14.930: INFO: Got endpoints: latency-svc-fqv68 [2.998880707s] Feb 12 12:58:14.997: INFO: Created: latency-svc-njjxx Feb 12 12:58:15.128: INFO: Got endpoints: latency-svc-njjxx [2.862811311s] Feb 12 12:58:15.167: INFO: Created: latency-svc-zmwvn Feb 12 12:58:15.209: INFO: Got endpoints: latency-svc-zmwvn [2.668833045s] Feb 12 12:58:15.227: INFO: Created: latency-svc-ds8fz Feb 12 12:58:15.231: INFO: Got endpoints: latency-svc-ds8fz [2.369552608s] Feb 12 12:58:15.335: INFO: Created: latency-svc-cjfms Feb 12 12:58:15.339: INFO: Got endpoints: latency-svc-cjfms [2.296326382s] Feb 12 12:58:15.530: INFO: Created: latency-svc-t4jlx Feb 12 12:58:15.547: INFO: Got endpoints: latency-svc-t4jlx [2.028056484s] Feb 12 12:58:15.897: INFO: Created: latency-svc-f5vt4 Feb 12 12:58:15.909: INFO: Got endpoints: latency-svc-f5vt4 [2.376024468s] Feb 12 12:58:16.143: INFO: Created: latency-svc-sk9k7 Feb 12 12:58:16.160: INFO: Got endpoints: latency-svc-sk9k7 [2.56206083s] Feb 12 12:58:16.239: INFO: Created: latency-svc-6xvx6 Feb 12 12:58:16.247: INFO: Got endpoints: latency-svc-6xvx6 [2.476955818s] Feb 12 12:58:16.381: INFO: Created: latency-svc-vwcc8 Feb 12 12:58:16.389: INFO: Got endpoints: latency-svc-vwcc8 [2.399621233s] Feb 12 12:58:16.586: INFO: Created: latency-svc-2xk44 Feb 12 12:58:16.667: INFO: Created: latency-svc-gtlqs Feb 12 12:58:16.667: INFO: Got endpoints: latency-svc-2xk44 [2.480074431s] Feb 12 12:58:16.674: INFO: Got endpoints: latency-svc-gtlqs [2.275557777s] Feb 12 12:58:16.877: INFO: Created: latency-svc-hv5ls Feb 12 12:58:16.889: INFO: Got endpoints: latency-svc-hv5ls [2.424706548s] Feb 12 12:58:17.067: INFO: Created: latency-svc-jf78m Feb 12 12:58:17.068: INFO: Got endpoints: latency-svc-jf78m [2.322850475s] Feb 12 12:58:17.147: INFO: Created: latency-svc-4zdcn Feb 12 12:58:17.213: INFO: Got endpoints: latency-svc-4zdcn [2.454248677s] Feb 12 12:58:17.297: INFO: Created: latency-svc-5q6wn Feb 12 12:58:17.384: INFO: Got endpoints: latency-svc-5q6wn [2.453573925s] Feb 12 12:58:17.415: INFO: Created: latency-svc-9jdcn Feb 12 12:58:17.437: INFO: Got endpoints: latency-svc-9jdcn [2.309025662s] Feb 12 12:58:17.598: INFO: Created: latency-svc-w4jf6 Feb 12 12:58:17.632: INFO: Got endpoints: latency-svc-w4jf6 [2.42234501s] Feb 12 12:58:17.678: INFO: Created: latency-svc-gdkrs Feb 12 12:58:17.908: INFO: Got endpoints: latency-svc-gdkrs [2.676807258s] Feb 12 12:58:17.909: INFO: Created: latency-svc-cdj59 Feb 12 12:58:17.921: INFO: Got endpoints: latency-svc-cdj59 [2.582011833s] Feb 12 12:58:18.101: INFO: Created: latency-svc-hhkvj Feb 12 12:58:18.124: INFO: Got endpoints: latency-svc-hhkvj [2.577247314s] Feb 12 12:58:18.282: INFO: Created: latency-svc-pjfn9 Feb 12 12:58:18.287: INFO: Got endpoints: latency-svc-pjfn9 [2.377854777s] Feb 12 12:58:18.359: INFO: Created: latency-svc-cbrkc Feb 12 12:58:18.482: INFO: Got endpoints: latency-svc-cbrkc [2.322344312s] Feb 12 
12:58:18.518: INFO: Created: latency-svc-csnqc Feb 12 12:58:18.526: INFO: Got endpoints: latency-svc-csnqc [2.279104446s] Feb 12 12:58:18.726: INFO: Created: latency-svc-gxtfh Feb 12 12:58:18.731: INFO: Got endpoints: latency-svc-gxtfh [2.34093843s] Feb 12 12:58:18.914: INFO: Created: latency-svc-2wq62 Feb 12 12:58:18.923: INFO: Got endpoints: latency-svc-2wq62 [2.255865303s] Feb 12 12:58:19.218: INFO: Created: latency-svc-5mzwl Feb 12 12:58:19.238: INFO: Got endpoints: latency-svc-5mzwl [2.563241133s] Feb 12 12:58:19.565: INFO: Created: latency-svc-r698j Feb 12 12:58:19.674: INFO: Got endpoints: latency-svc-r698j [2.785056548s] Feb 12 12:58:19.713: INFO: Created: latency-svc-w2mcd Feb 12 12:58:19.720: INFO: Got endpoints: latency-svc-w2mcd [2.651203691s] Feb 12 12:58:19.890: INFO: Created: latency-svc-sz8v5 Feb 12 12:58:19.907: INFO: Got endpoints: latency-svc-sz8v5 [2.69363955s] Feb 12 12:58:20.174: INFO: Created: latency-svc-scsws Feb 12 12:58:20.190: INFO: Got endpoints: latency-svc-scsws [2.806270484s] Feb 12 12:58:20.447: INFO: Created: latency-svc-d2w5x Feb 12 12:58:20.457: INFO: Got endpoints: latency-svc-d2w5x [3.020303189s] Feb 12 12:58:20.595: INFO: Created: latency-svc-kn25q Feb 12 12:58:20.611: INFO: Got endpoints: latency-svc-kn25q [2.979345486s] Feb 12 12:58:20.665: INFO: Created: latency-svc-nwqpp Feb 12 12:58:20.669: INFO: Got endpoints: latency-svc-nwqpp [2.759780107s] Feb 12 12:58:20.770: INFO: Created: latency-svc-dx4lt Feb 12 12:58:20.831: INFO: Got endpoints: latency-svc-dx4lt [2.909697766s] Feb 12 12:58:20.834: INFO: Created: latency-svc-ws6hm Feb 12 12:58:20.980: INFO: Got endpoints: latency-svc-ws6hm [2.855247985s] Feb 12 12:58:20.986: INFO: Created: latency-svc-wc4fq Feb 12 12:58:21.004: INFO: Got endpoints: latency-svc-wc4fq [2.715825832s] Feb 12 12:58:21.216: INFO: Created: latency-svc-8xxgq Feb 12 12:58:21.240: INFO: Got endpoints: latency-svc-8xxgq [2.757453985s] Feb 12 12:58:21.314: INFO: Created: latency-svc-428hz Feb 12 12:58:21.423: INFO: Got endpoints: latency-svc-428hz [2.89600138s] Feb 12 12:58:21.516: INFO: Created: latency-svc-tlwxr Feb 12 12:58:21.517: INFO: Got endpoints: latency-svc-tlwxr [2.786157588s] Feb 12 12:58:21.682: INFO: Created: latency-svc-rxkgn Feb 12 12:58:21.691: INFO: Got endpoints: latency-svc-rxkgn [2.768047863s] Feb 12 12:58:21.751: INFO: Created: latency-svc-898sn Feb 12 12:58:21.766: INFO: Got endpoints: latency-svc-898sn [2.527770632s] Feb 12 12:58:21.897: INFO: Created: latency-svc-cvp94 Feb 12 12:58:21.910: INFO: Got endpoints: latency-svc-cvp94 [2.235294214s] Feb 12 12:58:22.184: INFO: Created: latency-svc-jjrc5 Feb 12 12:58:22.190: INFO: Got endpoints: latency-svc-jjrc5 [2.470468766s] Feb 12 12:58:22.285: INFO: Created: latency-svc-645nj Feb 12 12:58:22.380: INFO: Got endpoints: latency-svc-645nj [2.472806348s] Feb 12 12:58:22.485: INFO: Created: latency-svc-b8jjd Feb 12 12:58:22.625: INFO: Got endpoints: latency-svc-b8jjd [2.434208209s] Feb 12 12:58:22.680: INFO: Created: latency-svc-swrmv Feb 12 12:58:22.681: INFO: Got endpoints: latency-svc-swrmv [2.22246439s] Feb 12 12:58:22.823: INFO: Created: latency-svc-6kcdz Feb 12 12:58:22.844: INFO: Got endpoints: latency-svc-6kcdz [2.232318213s] Feb 12 12:58:22.904: INFO: Created: latency-svc-w57kq Feb 12 12:58:23.066: INFO: Got endpoints: latency-svc-w57kq [2.397204081s] Feb 12 12:58:23.159: INFO: Created: latency-svc-w5k5s Feb 12 12:58:23.249: INFO: Got endpoints: latency-svc-w5k5s [2.41691614s] Feb 12 12:58:23.309: INFO: Created: latency-svc-qdfs7 Feb 12 12:58:23.322: INFO: 
Got endpoints: latency-svc-qdfs7 [2.341976307s] Feb 12 12:58:23.434: INFO: Created: latency-svc-p57hc Feb 12 12:58:23.607: INFO: Got endpoints: latency-svc-p57hc [2.602628955s] Feb 12 12:58:23.610: INFO: Created: latency-svc-9lkrp Feb 12 12:58:23.624: INFO: Got endpoints: latency-svc-9lkrp [2.383486133s] Feb 12 12:58:23.684: INFO: Created: latency-svc-nvmcn Feb 12 12:58:23.757: INFO: Got endpoints: latency-svc-nvmcn [2.334440195s] Feb 12 12:58:23.795: INFO: Created: latency-svc-5qrw4 Feb 12 12:58:23.807: INFO: Got endpoints: latency-svc-5qrw4 [2.290258491s] Feb 12 12:58:23.959: INFO: Created: latency-svc-gsp4n Feb 12 12:58:24.013: INFO: Got endpoints: latency-svc-gsp4n [2.321379419s] Feb 12 12:58:24.014: INFO: Created: latency-svc-7rvn4 Feb 12 12:58:24.023: INFO: Got endpoints: latency-svc-7rvn4 [2.257333723s] Feb 12 12:58:24.550: INFO: Created: latency-svc-9rjk7 Feb 12 12:58:24.565: INFO: Got endpoints: latency-svc-9rjk7 [2.655492392s] Feb 12 12:58:24.566: INFO: Latencies: [107.254242ms 186.917046ms 299.530133ms 342.482686ms 456.264209ms 514.344421ms 647.638257ms 727.133066ms 820.048702ms 961.460171ms 1.019338719s 1.139051934s 1.195394752s 1.316993419s 1.328001728s 1.464380844s 1.481611326s 1.494410083s 1.496938012s 1.511306473s 1.512350444s 1.568048194s 1.589368977s 1.64260383s 1.695359518s 1.722929015s 1.75843177s 1.78294941s 1.789365472s 1.792774288s 1.798385235s 1.81108922s 1.842613153s 1.855740152s 1.856335496s 1.877426937s 1.895992348s 1.951738718s 1.964521967s 2.013710867s 2.021521599s 2.028056484s 2.096629299s 2.145720761s 2.171001984s 2.181247315s 2.188079618s 2.18919345s 2.202841052s 2.205813394s 2.209731612s 2.212579202s 2.218839409s 2.22240629s 2.22246439s 2.227688095s 2.230116366s 2.232318213s 2.235294214s 2.254077547s 2.255865303s 2.257333723s 2.275557777s 2.278488149s 2.279104446s 2.288153156s 2.288423433s 2.290258491s 2.296326382s 2.309025662s 2.321379419s 2.322344312s 2.322850475s 2.329915943s 2.334440195s 2.34093843s 2.341976307s 2.369552608s 2.376024468s 2.377854777s 2.383486133s 2.397204081s 2.399621233s 2.41691614s 2.42234501s 2.424706548s 2.429018962s 2.434208209s 2.453573925s 2.454248677s 2.458917309s 2.461622883s 2.462815707s 2.470468766s 2.472806348s 2.476955818s 2.478169095s 2.480074431s 2.527770632s 2.533548097s 2.547024511s 2.558829411s 2.56206083s 2.563241133s 2.577247314s 2.582011833s 2.586300733s 2.601577063s 2.602628955s 2.603922275s 2.646892441s 2.647174251s 2.651203691s 2.655492392s 2.668833045s 2.675390516s 2.676807258s 2.689579942s 2.69363955s 2.694200612s 2.707389997s 2.715825832s 2.719961028s 2.72569022s 2.727601173s 2.728484083s 2.728954234s 2.735001641s 2.740924015s 2.741495008s 2.757453985s 2.759296255s 2.759780107s 2.764499644s 2.764621512s 2.767148796s 2.768047863s 2.768109515s 2.775155909s 2.776641248s 2.785056548s 2.786157588s 2.791959869s 2.806270484s 2.813629219s 2.814291259s 2.833411381s 2.839974322s 2.854930006s 2.855247985s 2.862811311s 2.881470018s 2.89600138s 2.909697766s 2.911663547s 2.917675389s 2.934031002s 2.958836261s 2.979345486s 2.986766139s 2.992088861s 2.995040707s 2.99777492s 2.998880707s 3.002953188s 3.020303189s 3.022221588s 3.040344296s 3.042870723s 3.073964422s 3.081159223s 3.087634318s 3.119036029s 3.135504663s 3.159066628s 3.204965262s 3.210685386s 3.237931192s 3.255516931s 3.274944986s 3.292509783s 3.321554223s 3.335301705s 3.355460069s 3.390387449s 3.401337344s 3.430396006s 3.457271152s 3.465044051s 3.480604372s 3.520742862s 3.543795418s 3.549029261s 3.562585304s 3.621893738s 3.64889599s 3.665374407s 3.689726697s 
3.736563896s 3.787465086s]
Feb 12 12:58:24.566: INFO: 50 %ile: 2.547024511s
Feb 12 12:58:24.566: INFO: 90 %ile: 3.292509783s
Feb 12 12:58:24.566: INFO: 99 %ile: 3.736563896s
Feb 12 12:58:24.566: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:58:24.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3727" for this suite.
Feb 12 12:59:10.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:59:10.690: INFO: namespace svc-latency-3727 deletion completed in 46.112706927s

• [SLOW TEST:91.737 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:59:10.691: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-a4db2acc-8a1e-4260-a203-e527c38e7589
STEP: Creating a pod to test consume configMaps
Feb 12 12:59:10.918: INFO: Waiting up to 5m0s for pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3" in namespace "configmap-7574" to be "success or failure"
Feb 12 12:59:11.072: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 153.14907ms
Feb 12 12:59:13.079: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.1602465s
Feb 12 12:59:15.086: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167120739s
Feb 12 12:59:17.094: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.175444404s
Feb 12 12:59:19.102: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183880953s
Feb 12 12:59:21.184: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 10.265795336s
Feb 12 12:59:23.190: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Pending", Reason="", readiness=false. Elapsed: 12.271591021s
Feb 12 12:59:25.201: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 14.281992596s
STEP: Saw pod success
Feb 12 12:59:25.201: INFO: Pod "pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3" satisfied condition "success or failure"
Feb 12 12:59:25.203: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3 container configmap-volume-test:
STEP: delete the pod
Feb 12 12:59:25.292: INFO: Waiting for pod pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3 to disappear
Feb 12 12:59:25.296: INFO: Pod pod-configmaps-e0e7332c-56b1-4da7-9858-c78c8425eed3 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Feb 12 12:59:25.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7574" for this suite.
Feb 12 12:59:32.017: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Feb 12 12:59:32.221: INFO: namespace configmap-7574 deletion completed in 6.91916899s

• [SLOW TEST:21.531 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Feb 12 12:59:32.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Feb 12 12:59:32.470: INFO: Waiting up to 5m0s for pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0" in namespace "downward-api-2335" to be "success or failure"
Feb 12 12:59:32.624: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 153.674128ms
Feb 12 12:59:34.641: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.170459702s
Feb 12 12:59:36.664: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.193919456s
Feb 12 12:59:38.673: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203006753s
Feb 12 12:59:40.688: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.217790865s
Feb 12 12:59:42.710: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 10.239857383s STEP: Saw pod success Feb 12 12:59:42.710: INFO: Pod "downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0" satisfied condition "success or failure" Feb 12 12:59:42.724: INFO: Trying to get logs from node iruya-node pod downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0 container dapi-container: STEP: delete the pod Feb 12 12:59:42.821: INFO: Waiting for pod downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0 to disappear Feb 12 12:59:42.826: INFO: Pod downward-api-cef0f6e3-ea34-44bd-825c-3f6230d0c9f0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 12:59:42.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2335" for this suite. Feb 12 12:59:48.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 12:59:48.994: INFO: namespace downward-api-2335 deletion completed in 6.162256138s • [SLOW TEST:16.772 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 12:59:48.995: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 12 12:59:49.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5915' Feb 12 12:59:51.255: INFO: stderr: "" Feb 12 12:59:51.255: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 12 12:59:51.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5915' Feb 12 12:59:51.455: INFO: stderr: "" Feb 12 12:59:51.455: INFO: stdout: "update-demo-nautilus-48bk2 update-demo-nautilus-kbqls " Feb 12 12:59:51.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915' Feb 12 12:59:53.103: INFO: stderr: "" Feb 12 12:59:53.103: INFO: stdout: "" Feb 12 12:59:53.103: INFO: update-demo-nautilus-48bk2 is created but not running Feb 12 12:59:58.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5915' Feb 12 12:59:58.311: INFO: stderr: "" Feb 12 12:59:58.311: INFO: stdout: "update-demo-nautilus-48bk2 update-demo-nautilus-kbqls " Feb 12 12:59:58.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915' Feb 12 12:59:58.608: INFO: stderr: "" Feb 12 12:59:58.608: INFO: stdout: "" Feb 12 12:59:58.608: INFO: update-demo-nautilus-48bk2 is created but not running Feb 12 13:00:03.609: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5915' Feb 12 13:00:03.823: INFO: stderr: "" Feb 12 13:00:03.823: INFO: stdout: "update-demo-nautilus-48bk2 update-demo-nautilus-kbqls " Feb 12 13:00:03.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915' Feb 12 13:00:04.328: INFO: stderr: "" Feb 12 13:00:04.329: INFO: stdout: "" Feb 12 13:00:04.329: INFO: update-demo-nautilus-48bk2 is created but not running Feb 12 13:00:09.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5915' Feb 12 13:00:09.521: INFO: stderr: "" Feb 12 13:00:09.521: INFO: stdout: "update-demo-nautilus-48bk2 update-demo-nautilus-kbqls " Feb 12 13:00:09.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915' Feb 12 13:00:09.671: INFO: stderr: "" Feb 12 13:00:09.672: INFO: stdout: "true" Feb 12 13:00:09.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-48bk2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5915' Feb 12 13:00:09.770: INFO: stderr: "" Feb 12 13:00:09.770: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 13:00:09.771: INFO: validating pod update-demo-nautilus-48bk2 Feb 12 13:00:09.824: INFO: got data: { "image": "nautilus.jpg" } Feb 12 13:00:09.824: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Feb 12 13:00:09.824: INFO: update-demo-nautilus-48bk2 is verified up and running Feb 12 13:00:09.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbqls -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5915' Feb 12 13:00:09.908: INFO: stderr: "" Feb 12 13:00:09.908: INFO: stdout: "true" Feb 12 13:00:09.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbqls -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5915' Feb 12 13:00:10.043: INFO: stderr: "" Feb 12 13:00:10.043: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 13:00:10.043: INFO: validating pod update-demo-nautilus-kbqls Feb 12 13:00:10.077: INFO: got data: { "image": "nautilus.jpg" } Feb 12 13:00:10.077: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 13:00:10.077: INFO: update-demo-nautilus-kbqls is verified up and running STEP: using delete to clean up resources Feb 12 13:00:10.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5915' Feb 12 13:00:10.217: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:00:10.217: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 12 13:00:10.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5915' Feb 12 13:00:10.351: INFO: stderr: "No resources found.\n" Feb 12 13:00:10.351: INFO: stdout: "" Feb 12 13:00:10.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5915 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 12 13:00:10.494: INFO: stderr: "" Feb 12 13:00:10.494: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:00:10.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5915" for this suite. 
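
The replication controller itself is piped to kubectl create -f - on stdin, so its manifest never appears in the log. A minimal sketch that would reproduce the two nautilus pods seen above; the field values are inferred from the pod names, the name=update-demo selector and the image reported by the test, not copied from the suite's data file:

kubectl --kubeconfig=/root/.kube/config create --namespace=kubectl-5915 -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: update-demo-nautilus
spec:
  replicas: 2
  selector:
    name: update-demo
  template:
    metadata:
      labels:
        name: update-demo
    spec:
      containers:
      - name: update-demo
        image: gcr.io/kubernetes-e2e-test-images/nautilus:1.0
EOF

Cleanup in the log then uses delete --grace-period=0 --force and verifies with label-selector queries that no rc, svc or pods remain.
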
Feb 12 13:00:34.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:00:34.766: INFO: namespace kubectl-5915 deletion completed in 24.262399439s • [SLOW TEST:45.771 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:00:34.766: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 12 13:00:35.056: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Feb 12 13:00:35.174: INFO: Number of nodes with available pods: 0 Feb 12 13:00:35.174: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
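
The "complex daemon" test that starts here creates a DaemonSet whose pod template carries a node selector, so initially no daemon pods can run anywhere; the log below then shows a pod appearing once a node label is switched to blue and disappearing again when the label moves to green. Reproduced by hand it would look roughly like this; the label key, container name and image are illustrative assumptions, since the manifest is built in code and never printed:

kubectl create --namespace=daemonsets-601 -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF

kubectl label node iruya-node color=blue              # daemon pod gets scheduled onto the node
kubectl label node iruya-node color=green --overwrite # daemon pod is removed again
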
Feb 12 13:00:35.636: INFO: Number of nodes with available pods: 0 Feb 12 13:00:35.636: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:36.690: INFO: Number of nodes with available pods: 0 Feb 12 13:00:36.691: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:37.644: INFO: Number of nodes with available pods: 0 Feb 12 13:00:37.644: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:38.655: INFO: Number of nodes with available pods: 0 Feb 12 13:00:38.655: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:39.645: INFO: Number of nodes with available pods: 0 Feb 12 13:00:39.645: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:40.645: INFO: Number of nodes with available pods: 0 Feb 12 13:00:40.645: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:41.646: INFO: Number of nodes with available pods: 0 Feb 12 13:00:41.646: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:43.333: INFO: Number of nodes with available pods: 0 Feb 12 13:00:43.333: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:43.645: INFO: Number of nodes with available pods: 0 Feb 12 13:00:43.645: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:44.926: INFO: Number of nodes with available pods: 0 Feb 12 13:00:44.926: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:45.645: INFO: Number of nodes with available pods: 0 Feb 12 13:00:45.645: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:46.681: INFO: Number of nodes with available pods: 1 Feb 12 13:00:46.681: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 12 13:00:46.740: INFO: Number of nodes with available pods: 1 Feb 12 13:00:46.740: INFO: Number of running nodes: 0, number of available pods: 1 Feb 12 13:00:47.755: INFO: Number of nodes with available pods: 0 Feb 12 13:00:47.755: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 12 13:00:47.779: INFO: Number of nodes with available pods: 0 Feb 12 13:00:47.779: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:48.793: INFO: Number of nodes with available pods: 0 Feb 12 13:00:48.793: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:49.789: INFO: Number of nodes with available pods: 0 Feb 12 13:00:49.789: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:50.796: INFO: Number of nodes with available pods: 0 Feb 12 13:00:50.796: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:51.788: INFO: Number of nodes with available pods: 0 Feb 12 13:00:51.788: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:52.790: INFO: Number of nodes with available pods: 0 Feb 12 13:00:52.790: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:53.787: INFO: Number of nodes with available pods: 0 Feb 12 13:00:53.787: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:54.789: INFO: Number of nodes with available pods: 0 Feb 12 13:00:54.789: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:55.790: INFO: Number of nodes with available pods: 0 Feb 12 13:00:55.791: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:56.795: INFO: Number 
of nodes with available pods: 0 Feb 12 13:00:56.796: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:57.789: INFO: Number of nodes with available pods: 0 Feb 12 13:00:57.789: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:58.789: INFO: Number of nodes with available pods: 0 Feb 12 13:00:58.789: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:00:59.786: INFO: Number of nodes with available pods: 0 Feb 12 13:00:59.786: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:01:00.788: INFO: Number of nodes with available pods: 0 Feb 12 13:01:00.788: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:01:01.814: INFO: Number of nodes with available pods: 0 Feb 12 13:01:01.814: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:01:02.798: INFO: Number of nodes with available pods: 0 Feb 12 13:01:02.798: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:01:03.803: INFO: Number of nodes with available pods: 0 Feb 12 13:01:03.803: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:01:04.790: INFO: Number of nodes with available pods: 1 Feb 12 13:01:04.790: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-601, will wait for the garbage collector to delete the pods Feb 12 13:01:04.879: INFO: Deleting DaemonSet.extensions daemon-set took: 19.890369ms Feb 12 13:01:05.180: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.406122ms Feb 12 13:01:11.089: INFO: Number of nodes with available pods: 0 Feb 12 13:01:11.089: INFO: Number of running nodes: 0, number of available pods: 0 Feb 12 13:01:11.097: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-601/daemonsets","resourceVersion":"24069853"},"items":null} Feb 12 13:01:11.101: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-601/pods","resourceVersion":"24069853"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:01:11.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-601" for this suite. 
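
The step above that flips the DaemonSet's node selector to the green label and switches its update strategy to RollingUpdate could be reproduced outside the framework with a merge patch along these lines (a sketch; the label key and value are the same assumptions as in the earlier manifest):

kubectl patch daemonset daemon-set --namespace=daemonsets-601 --type=merge \
  -p '{"spec":{"updateStrategy":{"type":"RollingUpdate"},"template":{"spec":{"nodeSelector":{"color":"green"}}}}}'
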
Feb 12 13:01:17.179: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:01:17.298: INFO: namespace daemonsets-601 deletion completed in 6.146764546s • [SLOW TEST:42.532 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:01:17.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 13:01:17.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73" in namespace "projected-5303" to be "success or failure" Feb 12 13:01:17.418: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Pending", Reason="", readiness=false. Elapsed: 23.238491ms Feb 12 13:01:19.426: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031832676s Feb 12 13:01:21.434: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039756228s Feb 12 13:01:23.448: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053120001s Feb 12 13:01:25.469: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Running", Reason="", readiness=true. Elapsed: 8.074216942s Feb 12 13:01:27.477: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.082423922s STEP: Saw pod success Feb 12 13:01:27.477: INFO: Pod "downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73" satisfied condition "success or failure" Feb 12 13:01:27.481: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73 container client-container: STEP: delete the pod Feb 12 13:01:27.604: INFO: Waiting for pod downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73 to disappear Feb 12 13:01:27.614: INFO: Pod downwardapi-volume-05cbceef-781b-4f2e-af85-7aeac6a94e73 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:01:27.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5303" for this suite. Feb 12 13:01:33.645: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:01:33.774: INFO: namespace projected-5303 deletion completed in 6.153858904s • [SLOW TEST:16.475 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:01:33.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-adc80c7c-4c4e-4fd8-babd-335bf65df458 STEP: Creating a pod to test consume secrets Feb 12 13:01:34.144: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c" in namespace "projected-9597" to be "success or failure" Feb 12 13:01:34.156: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.320418ms Feb 12 13:01:36.164: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020087657s Feb 12 13:01:38.178: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03341153s Feb 12 13:01:40.188: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043645675s Feb 12 13:01:42.201: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.057284793s Feb 12 13:01:44.209: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065195231s Feb 12 13:01:46.216: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.072048485s STEP: Saw pod success Feb 12 13:01:46.216: INFO: Pod "pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c" satisfied condition "success or failure" Feb 12 13:01:46.219: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c container projected-secret-volume-test: STEP: delete the pod Feb 12 13:01:46.438: INFO: Waiting for pod pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c to disappear Feb 12 13:01:46.448: INFO: Pod pod-projected-secrets-f5b6ec61-d1e6-46fc-992e-5e3fbab95f2c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:01:46.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9597" for this suite. Feb 12 13:01:52.507: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:01:52.682: INFO: namespace projected-9597 deletion completed in 6.209553231s • [SLOW TEST:18.908 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:01:52.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-bb33293e-ce13-41b9-afa2-9a3fb3551c25 STEP: Creating secret with name s-test-opt-upd-6fb24702-74a7-4310-ba9a-6d9f7b70b5ef STEP: Creating the pod STEP: Deleting secret s-test-opt-del-bb33293e-ce13-41b9-afa2-9a3fb3551c25 STEP: Updating secret s-test-opt-upd-6fb24702-74a7-4310-ba9a-6d9f7b70b5ef STEP: Creating secret with name s-test-opt-create-ef447de3-0c0a-4767-b55a-a91151fd0b98 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:03:28.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4686" for this suite. 
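
The Secrets test above mounts its secrets with optional set to true, which is why the pod can start while the "create" secret does not exist yet and why the later delete, update and create are all eventually reflected inside the volumes (the kubelet refreshes mounted secret contents on its sync period). A minimal pod illustrating an optional secret volume; the names, image and mount path are placeholders rather than the generated ones from this run:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-watcher
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["sh", "-c", "while true; do ls /etc/secret-volume; sleep 5; done"]
    volumeMounts:
    - name: optional-secret
      mountPath: /etc/secret-volume
  volumes:
  - name: optional-secret
    secret:
      secretName: my-optional-secret
      optional: true
EOF
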
Feb 12 13:03:50.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:03:50.962: INFO: namespace secrets-4686 deletion completed in 22.130480336s • [SLOW TEST:118.279 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:03:50.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9375 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 12 13:03:51.032: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 12 13:04:33.337: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9375 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 13:04:33.337: INFO: >>> kubeConfig: /root/.kube/config I0212 13:04:33.430007 8 log.go:172] (0xc0009da8f0) (0xc0019e2780) Create stream I0212 13:04:33.430115 8 log.go:172] (0xc0009da8f0) (0xc0019e2780) Stream added, broadcasting: 1 I0212 13:04:33.440751 8 log.go:172] (0xc0009da8f0) Reply frame received for 1 I0212 13:04:33.440823 8 log.go:172] (0xc0009da8f0) (0xc001c74aa0) Create stream I0212 13:04:33.440837 8 log.go:172] (0xc0009da8f0) (0xc001c74aa0) Stream added, broadcasting: 3 I0212 13:04:33.445411 8 log.go:172] (0xc0009da8f0) Reply frame received for 3 I0212 13:04:33.445548 8 log.go:172] (0xc0009da8f0) (0xc000a3c140) Create stream I0212 13:04:33.445582 8 log.go:172] (0xc0009da8f0) (0xc000a3c140) Stream added, broadcasting: 5 I0212 13:04:33.449335 8 log.go:172] (0xc0009da8f0) Reply frame received for 5 I0212 13:04:34.624707 8 log.go:172] (0xc0009da8f0) Data frame received for 3 I0212 13:04:34.624837 8 log.go:172] (0xc001c74aa0) (3) Data frame handling I0212 13:04:34.624865 8 log.go:172] (0xc001c74aa0) (3) Data frame sent I0212 13:04:34.782184 8 log.go:172] (0xc0009da8f0) (0xc001c74aa0) Stream removed, broadcasting: 3 I0212 13:04:34.782500 8 log.go:172] (0xc0009da8f0) (0xc000a3c140) Stream removed, broadcasting: 5 I0212 13:04:34.782601 8 log.go:172] (0xc0009da8f0) Data frame received for 1 I0212 13:04:34.782633 8 log.go:172] (0xc0019e2780) (1) Data frame handling I0212 13:04:34.782694 8 log.go:172] (0xc0019e2780) (1) Data frame sent I0212 13:04:34.782729 8 log.go:172] (0xc0009da8f0) (0xc0019e2780) Stream removed, broadcasting: 1 I0212 13:04:34.782772 8 log.go:172] 
(0xc0009da8f0) Go away received I0212 13:04:34.783040 8 log.go:172] (0xc0009da8f0) (0xc0019e2780) Stream removed, broadcasting: 1 I0212 13:04:34.783073 8 log.go:172] (0xc0009da8f0) (0xc001c74aa0) Stream removed, broadcasting: 3 I0212 13:04:34.783096 8 log.go:172] (0xc0009da8f0) (0xc000a3c140) Stream removed, broadcasting: 5 Feb 12 13:04:34.783: INFO: Found all expected endpoints: [netserver-0] Feb 12 13:04:34.792: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9375 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 13:04:34.792: INFO: >>> kubeConfig: /root/.kube/config I0212 13:04:34.847646 8 log.go:172] (0xc0000edc30) (0xc0025f4780) Create stream I0212 13:04:34.847790 8 log.go:172] (0xc0000edc30) (0xc0025f4780) Stream added, broadcasting: 1 I0212 13:04:34.856430 8 log.go:172] (0xc0000edc30) Reply frame received for 1 I0212 13:04:34.856461 8 log.go:172] (0xc0000edc30) (0xc001c74b40) Create stream I0212 13:04:34.856468 8 log.go:172] (0xc0000edc30) (0xc001c74b40) Stream added, broadcasting: 3 I0212 13:04:34.857613 8 log.go:172] (0xc0000edc30) Reply frame received for 3 I0212 13:04:34.857635 8 log.go:172] (0xc0000edc30) (0xc0019e2960) Create stream I0212 13:04:34.857644 8 log.go:172] (0xc0000edc30) (0xc0019e2960) Stream added, broadcasting: 5 I0212 13:04:34.859164 8 log.go:172] (0xc0000edc30) Reply frame received for 5 I0212 13:04:35.974286 8 log.go:172] (0xc0000edc30) Data frame received for 3 I0212 13:04:35.974402 8 log.go:172] (0xc001c74b40) (3) Data frame handling I0212 13:04:35.974422 8 log.go:172] (0xc001c74b40) (3) Data frame sent I0212 13:04:36.148756 8 log.go:172] (0xc0000edc30) Data frame received for 1 I0212 13:04:36.148924 8 log.go:172] (0xc0025f4780) (1) Data frame handling I0212 13:04:36.148967 8 log.go:172] (0xc0025f4780) (1) Data frame sent I0212 13:04:36.149625 8 log.go:172] (0xc0000edc30) (0xc0025f4780) Stream removed, broadcasting: 1 I0212 13:04:36.150175 8 log.go:172] (0xc0000edc30) (0xc001c74b40) Stream removed, broadcasting: 3 I0212 13:04:36.150258 8 log.go:172] (0xc0000edc30) (0xc0019e2960) Stream removed, broadcasting: 5 I0212 13:04:36.150349 8 log.go:172] (0xc0000edc30) (0xc0025f4780) Stream removed, broadcasting: 1 I0212 13:04:36.150400 8 log.go:172] (0xc0000edc30) (0xc001c74b40) Stream removed, broadcasting: 3 I0212 13:04:36.150432 8 log.go:172] (0xc0000edc30) (0xc0019e2960) Stream removed, broadcasting: 5 Feb 12 13:04:36.151: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:04:36.152: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9375" for this suite. 
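
The networking test above checks node-to-pod UDP reachability by exec'ing into a host-network "hostexec" pod and firing a datagram at each netserver pod IP with netcat; the exact command is visible in the ExecWithOptions entries, and the streamed frames that follow are just the exec plumbing. Run by hand it reduces to something like this (the pod IP 10.44.0.1 and port 8081 are the values from this run):

kubectl --kubeconfig=/root/.kube/config exec host-test-container-pod \
  --namespace=pod-network-test-9375 -c hostexec -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'"

A non-empty reply means the netserver pod answered, which is what the "Found all expected endpoints" lines report.
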
Feb 12 13:04:58.223: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:04:58.306: INFO: namespace pod-network-test-9375 deletion completed in 22.145201231s • [SLOW TEST:67.345 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:04:58.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-9deb8080-9e8e-4486-837e-60264f66396b STEP: Creating a pod to test consume configMaps Feb 12 13:04:58.511: INFO: Waiting up to 5m0s for pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65" in namespace "configmap-5659" to be "success or failure" Feb 12 13:04:58.523: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Pending", Reason="", readiness=false. Elapsed: 12.402976ms Feb 12 13:05:00.536: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02497141s Feb 12 13:05:02.548: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037258548s Feb 12 13:05:04.565: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054207585s Feb 12 13:05:06.577: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Running", Reason="", readiness=true. Elapsed: 8.066116278s Feb 12 13:05:08.673: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.162701889s STEP: Saw pod success Feb 12 13:05:08.674: INFO: Pod "pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65" satisfied condition "success or failure" Feb 12 13:05:08.695: INFO: Trying to get logs from node iruya-node pod pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65 container configmap-volume-test: STEP: delete the pod Feb 12 13:05:08.820: INFO: Waiting for pod pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65 to disappear Feb 12 13:05:08.828: INFO: Pod pod-configmaps-134a33c1-5de6-490d-a73f-76cff368ef65 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:05:08.829: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5659" for this suite. Feb 12 13:05:15.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:05:15.159: INFO: namespace configmap-5659 deletion completed in 6.323145249s • [SLOW TEST:16.852 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:05:15.162: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Feb 12 13:05:15.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7749' Feb 12 13:05:16.165: INFO: stderr: "" Feb 12 13:05:16.165: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. 
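
The redis-master controller created here exists only to give the test a pod with predictable, timestamped output; the steps that follow then exercise kubectl's log-filtering flags against it. Collected in one place, the invocations from this run are (the pod name redis-master-d27b4 is specific to this run):

kubectl logs redis-master-d27b4 redis-master --namespace=kubectl-7749                 # full log
kubectl logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --tail=1        # last line only
kubectl logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --limit-bytes=1 # first byte only
kubectl logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --tail=1 --timestamps
kubectl logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --since=1s      # expected to be empty
kubectl logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --since=24h
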
Feb 12 13:05:17.174: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:05:17.174: INFO: Found 0 / 1 Feb 12 13:05:18.178: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:05:18.178: INFO: Found 0 / 1 Feb 12 13:05:19.177: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:05:19.178: INFO: Found 0 / 1 Feb 12 13:05:20.203: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:05:20.203: INFO: Found 0 / 1 Feb 12 13:05:21.174: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:05:21.175: INFO: Found 0 / 1 Feb 12 13:05:22.180: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:05:22.180: INFO: Found 0 / 1 Feb 12 13:05:23.175: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:05:23.175: INFO: Found 1 / 1 Feb 12 13:05:23.175: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Feb 12 13:05:23.180: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:05:23.180: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Feb 12 13:05:23.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749' Feb 12 13:05:23.416: INFO: stderr: "" Feb 12 13:05:23.416: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 Feb 13:05:22.506 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Feb 13:05:22.506 # Server started, Redis version 3.2.12\n1:M 12 Feb 13:05:22.506 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 12 Feb 13:05:22.506 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Feb 12 13:05:23.417: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --tail=1' Feb 12 13:05:23.651: INFO: stderr: "" Feb 12 13:05:23.651: INFO: stdout: "1:M 12 Feb 13:05:22.506 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Feb 12 13:05:23.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --limit-bytes=1' Feb 12 13:05:23.783: INFO: stderr: "" Feb 12 13:05:23.783: INFO: stdout: " " STEP: exposing timestamps Feb 12 13:05:23.783: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --tail=1 --timestamps' Feb 12 13:05:23.964: INFO: stderr: "" Feb 12 13:05:23.964: INFO: stdout: "2020-02-12T13:05:22.508939388Z 1:M 12 Feb 13:05:22.506 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Feb 12 13:05:26.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --since=1s' Feb 12 13:05:26.806: INFO: stderr: "" Feb 12 13:05:26.806: INFO: stdout: "" Feb 12 13:05:26.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-d27b4 redis-master --namespace=kubectl-7749 --since=24h' Feb 12 13:05:26.984: INFO: stderr: "" Feb 12 13:05:26.985: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 12 Feb 13:05:22.506 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 12 Feb 13:05:22.506 # Server started, Redis version 3.2.12\n1:M 12 Feb 13:05:22.506 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 12 Feb 13:05:22.506 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Feb 12 13:05:26.985: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7749' Feb 12 13:05:27.145: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:05:27.145: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Feb 12 13:05:27.145: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-7749' Feb 12 13:05:27.285: INFO: stderr: "No resources found.\n" Feb 12 13:05:27.285: INFO: stdout: "" Feb 12 13:05:27.285: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-7749 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 12 13:05:27.449: INFO: stderr: "" Feb 12 13:05:27.449: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:05:27.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7749" for this suite. Feb 12 13:05:49.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:05:49.565: INFO: namespace kubectl-7749 deletion completed in 22.104374642s • [SLOW TEST:34.404 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:05:49.566: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 12 13:05:49.704: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
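
The RollingUpdate test that starts here first waits for the DaemonSet's nginx:1.14-alpine pods to become available on both nodes, then updates the pod template image to the redis test image and watches old pods being replaced one node at a time; that is what the long run of "Wrong image for pod" lines below records. Done by hand, the image bump is a single command (a sketch; the container name "app" is an assumption, since the template is never printed in the log):

kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0 \
  --namespace=daemonsets-923
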
Feb 12 13:05:49.768: INFO: Number of nodes with available pods: 0 Feb 12 13:05:49.768: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:05:52.046: INFO: Number of nodes with available pods: 0 Feb 12 13:05:52.046: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:05:53.485: INFO: Number of nodes with available pods: 0 Feb 12 13:05:53.485: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:05:53.864: INFO: Number of nodes with available pods: 0 Feb 12 13:05:53.864: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:05:54.791: INFO: Number of nodes with available pods: 0 Feb 12 13:05:54.791: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:05:57.641: INFO: Number of nodes with available pods: 0 Feb 12 13:05:57.641: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:05:58.295: INFO: Number of nodes with available pods: 0 Feb 12 13:05:58.296: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:05:58.780: INFO: Number of nodes with available pods: 0 Feb 12 13:05:58.780: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:05:59.778: INFO: Number of nodes with available pods: 0 Feb 12 13:05:59.778: INFO: Node iruya-node is running more than one daemon pod Feb 12 13:06:00.777: INFO: Number of nodes with available pods: 2 Feb 12 13:06:00.777: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Feb 12 13:06:00.852: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:00.852: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:01.920: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:01.921: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:02.919: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:02.919: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:03.916: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:03.916: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:04.917: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:04.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:05.918: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:05.918: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 12 13:06:06.920: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:06.920: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:06.920: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:07.917: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:07.917: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:07.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:08.918: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:08.919: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:08.919: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:09.919: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:09.919: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:09.919: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:10.925: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:10.925: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:10.925: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:11.918: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:11.918: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:11.918: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:12.917: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:12.917: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:12.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:13.936: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:13.936: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:13.936: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:14.918: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:14.918: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:14.918: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Feb 12 13:06:15.915: INFO: Wrong image for pod: daemon-set-9jkjs. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:15.915: INFO: Pod daemon-set-9jkjs is not available Feb 12 13:06:15.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:16.924: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:16.924: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:17.915: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:17.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:18.921: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:18.921: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:19.929: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:19.929: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:20.925: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:20.925: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:21.915: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:21.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:22.922: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:22.922: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:24.056: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:24.057: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:24.923: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:24.923: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:25.953: INFO: Pod daemon-set-b556s is not available Feb 12 13:06:25.954: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:26.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:29.371: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:29.915: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:30.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:31.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:32.916: INFO: Wrong image for pod: daemon-set-qktjb. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:32.916: INFO: Pod daemon-set-qktjb is not available Feb 12 13:06:33.920: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:33.920: INFO: Pod daemon-set-qktjb is not available Feb 12 13:06:34.920: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:34.920: INFO: Pod daemon-set-qktjb is not available Feb 12 13:06:35.919: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:35.919: INFO: Pod daemon-set-qktjb is not available Feb 12 13:06:36.917: INFO: Wrong image for pod: daemon-set-qktjb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Feb 12 13:06:36.917: INFO: Pod daemon-set-qktjb is not available Feb 12 13:06:37.929: INFO: Pod daemon-set-n4jxx is not available STEP: Check that daemon pods are still running on every node of the cluster. Feb 12 13:06:37.958: INFO: Number of nodes with available pods: 1 Feb 12 13:06:37.958: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 12 13:06:38.988: INFO: Number of nodes with available pods: 1 Feb 12 13:06:38.988: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 12 13:06:39.977: INFO: Number of nodes with available pods: 1 Feb 12 13:06:39.977: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 12 13:06:40.996: INFO: Number of nodes with available pods: 1 Feb 12 13:06:40.996: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 12 13:06:42.673: INFO: Number of nodes with available pods: 1 Feb 12 13:06:42.673: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 12 13:06:43.081: INFO: Number of nodes with available pods: 1 Feb 12 13:06:43.081: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 12 13:06:44.039: INFO: Number of nodes with available pods: 1 Feb 12 13:06:44.039: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Feb 12 13:06:44.975: INFO: Number of nodes with available pods: 2 Feb 12 13:06:44.975: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-923, will wait for the garbage collector to delete the pods Feb 12 13:06:45.066: INFO: Deleting DaemonSet.extensions daemon-set took: 10.942501ms Feb 12 13:06:45.367: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.712101ms Feb 12 13:06:52.583: INFO: Number of nodes with available pods: 0 Feb 12 13:06:52.583: INFO: Number of running nodes: 0, number of available pods: 0 Feb 12 13:06:52.589: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-923/daemonsets","resourceVersion":"24070618"},"items":null} Feb 12 13:06:52.594: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-923/pods","resourceVersion":"24070618"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:06:52.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-923" for this suite. Feb 12 13:07:00.650: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:07:00.816: INFO: namespace daemonsets-923 deletion completed in 8.198253566s • [SLOW TEST:71.250 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:07:00.817: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-d1d71099-5f68-44f3-a554-a1f261d46402 in namespace container-probe-5012 Feb 12 13:07:08.946: INFO: Started pod liveness-d1d71099-5f68-44f3-a554-a1f261d46402 in namespace container-probe-5012 STEP: checking the pod's current state and verifying that restartCount is present Feb 12 13:07:08.952: INFO: Initial restart count of pod liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is 0 Feb 12 13:07:25.165: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 1 (16.21245024s elapsed) Feb 12 13:07:45.306: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 2 (36.353921561s elapsed) Feb 12 13:08:07.437: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 3 (58.484740246s elapsed) Feb 12 13:08:27.560: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 4 (1m18.608089208s elapsed) Feb 12 13:09:25.875: INFO: Restart count of pod container-probe-5012/liveness-d1d71099-5f68-44f3-a554-a1f261d46402 is now 5 (2m16.922341974s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:09:25.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5012" for this suite. 
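The restart counts logged just above come from a liveness probe that keeps failing; the kubelet restarts the container each time, so the count can only go up. A minimal sketch of that kind of pod, assuming the client-go release matching the v1.15 cluster in this run (pod name, image and probe timings here are illustrative, not the exact spec the suite used):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig path the suite itself uses in this log.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// The container creates a file, sleeps, then removes it; the exec liveness
	// probe fails once the file is gone, the kubelet restarts the container,
	// and the restart count climbs monotonically.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-sketch"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			Containers: []corev1.Container{{
				Name:    "liveness",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /tmp/healthy; sleep 30; rm -f /tmp/healthy; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/healthy"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}

	created, err := clientset.CoreV1().Pods("default").Create(pod) // v1.15-era signature, no context argument
	if err != nil {
		panic(err)
	}
	fmt.Println("created", created.Name)
}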
Feb 12 13:09:31.978: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:09:32.122: INFO: namespace container-probe-5012 deletion completed in 6.169465427s • [SLOW TEST:151.305 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:09:32.123: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0212 13:09:49.112879 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 12 13:09:49.113: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:09:49.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1729" for this suite. 
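What the garbage-collector test above relies on is plain ownerReferences plumbing: half of the pods get the surviving RC added as a second owner, and the doomed RC is deleted with foreground propagation so the collector has to wait for dependents. A rough sketch of those two calls under the same v1.15-era client-go assumption (the helper name and the use of Update rather than a patch are assumptions; the RC name is taken from the log):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// addOwnerAndForegroundDelete gives a pod a second owner reference pointing at
// the RC that should stay, then deletes the other RC with foreground
// propagation so the garbage collector must wait for its dependents.
func addOwnerAndForegroundDelete(cs kubernetes.Interface, ns string, pod *corev1.Pod, stayRC *corev1.ReplicationController) error {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       stayRC.Name,
		UID:        stayRC.UID,
	})
	if _, err := cs.CoreV1().Pods(ns).Update(pod); err != nil { // v1.15-era signature
		return err
	}

	foreground := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete("simpletest-rc-to-be-deleted",
		&metav1.DeleteOptions{PropagationPolicy: &foreground})
}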
Feb 12 13:10:02.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:10:03.928: INFO: namespace gc-1729 deletion completed in 13.662657695s • [SLOW TEST:31.805 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:10:03.928: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-7ca1a334-fc90-41d7-aa18-a7316df482a7 STEP: Creating a pod to test consume secrets Feb 12 13:10:06.218: INFO: Waiting up to 5m0s for pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88" in namespace "secrets-8790" to be "success or failure" Feb 12 13:10:06.892: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 674.172344ms Feb 12 13:10:08.905: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.686377327s Feb 12 13:10:10.915: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.697079983s Feb 12 13:10:12.928: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.709449161s Feb 12 13:10:14.940: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 8.721286141s Feb 12 13:10:16.948: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 10.729434278s Feb 12 13:10:18.962: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Pending", Reason="", readiness=false. Elapsed: 12.743746197s Feb 12 13:10:20.969: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.750624251s STEP: Saw pod success Feb 12 13:10:20.969: INFO: Pod "pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88" satisfied condition "success or failure" Feb 12 13:10:20.973: INFO: Trying to get logs from node iruya-node pod pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88 container secret-env-test: STEP: delete the pod Feb 12 13:10:21.034: INFO: Waiting for pod pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88 to disappear Feb 12 13:10:21.091: INFO: Pod pod-secrets-ff6ba523-5247-48c3-be40-e3521f452f88 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:10:21.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8790" for this suite. Feb 12 13:10:27.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:10:27.244: INFO: namespace secrets-8790 deletion completed in 6.147963898s • [SLOW TEST:23.315 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:10:27.244: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Feb 12 13:10:27.429: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071139,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 13:10:27.430: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071140,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp: 
,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 12 13:10:27.430: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071141,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Feb 12 13:10:37.529: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071158,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 12 13:10:37.529: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071159,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Feb 12 13:10:37.529: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3118,SelfLink:/api/v1/namespaces/watch-3118/configmaps/e2e-watch-test-label-changed,UID:4cd59c75-bdf2-4eed-8575-861c8e922ae2,ResourceVersion:24071160,Generation:0,CreationTimestamp:2020-02-12 13:10:27 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:10:37.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3118" for this suite. Feb 12 13:10:43.595: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:10:43.681: INFO: namespace watch-3118 deletion completed in 6.128106604s • [SLOW TEST:16.437 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:10:43.682: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6133.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 242.141.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.141.242_udp@PTR;check="$$(dig +tcp +noall +answer +search 242.141.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.141.242_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6133.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6133.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6133.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 242.141.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.141.242_udp@PTR;check="$$(dig +tcp +noall +answer +search 242.141.109.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.109.141.242_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 13:11:00.164: INFO: Unable to read wheezy_udp@dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.173: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.182: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.191: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.197: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.208: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.225: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.237: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.252: INFO: Unable to read 10.109.141.242_udp@PTR from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.258: INFO: Unable to read 10.109.141.242_tcp@PTR from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.264: INFO: Unable to read jessie_udp@dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.271: INFO: Unable to read jessie_tcp@dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.276: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the 
server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.282: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.288: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.296: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.302: INFO: Unable to read jessie_udp@PodARecord from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.313: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.325: INFO: Unable to read 10.109.141.242_udp@PTR from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.333: INFO: Unable to read 10.109.141.242_tcp@PTR from pod dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c: the server could not find the requested resource (get pods dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c) Feb 12 13:11:00.333: INFO: Lookups using dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c failed for: [wheezy_udp@dns-test-service.dns-6133.svc.cluster.local wheezy_tcp@dns-test-service.dns-6133.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.141.242_udp@PTR 10.109.141.242_tcp@PTR jessie_udp@dns-test-service.dns-6133.svc.cluster.local jessie_tcp@dns-test-service.dns-6133.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6133.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6133.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6133.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.141.242_udp@PTR 10.109.141.242_tcp@PTR] Feb 12 13:11:05.539: INFO: DNS probes using dns-6133/dns-test-0b1d45aa-9ee5-484e-8197-bbf94d09c92c succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:11:05.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6133" for this suite. 
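The wheezy and jessie probe pods above just run dig in a loop against the service, SRV and PTR names; the same records can be checked with Go's resolver. The hostnames and the 10.109.141.242 ClusterIP come straight from the log and only resolve from inside that cluster:

package main

import (
	"fmt"
	"net"
)

func main() {
	// A record for the headless test service (as probed via "dig ... A" above).
	addrs, err := net.LookupHost("dns-test-service.dns-6133.svc.cluster.local")
	fmt.Println(addrs, err)

	// SRV record for the named http port (as probed via "dig ... SRV" above).
	_, srvs, err := net.LookupSRV("http", "tcp", "dns-test-service.dns-6133.svc.cluster.local")
	fmt.Println(srvs, err)

	// Reverse (PTR) lookup of the service ClusterIP from the log.
	names, err := net.LookupAddr("10.109.141.242")
	fmt.Println(names, err)
}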
Feb 12 13:11:11.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:11:12.104: INFO: namespace dns-6133 deletion completed in 6.250785068s • [SLOW TEST:28.422 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:11:12.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Feb 12 13:11:22.846: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1131 pod-service-account-1e204a4d-525c-4a8c-a62d-fa33498256dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Feb 12 13:11:26.718: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1131 pod-service-account-1e204a4d-525c-4a8c-a62d-fa33498256dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Feb 12 13:11:27.162: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1131 pod-service-account-1e204a4d-525c-4a8c-a62d-fa33498256dd -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:11:27.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1131" for this suite. 
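The three kubectl exec calls above only cat files from the projected service-account volume; every pod using the default service account sees the same fixed paths. A small standard-library sketch of the same reads:

package main

import (
	"fmt"
	"io/ioutil"
)

func main() {
	// The token, CA bundle and namespace files mounted into the pod,
	// matching the files read via kubectl exec in the log above.
	for _, f := range []string{
		"/var/run/secrets/kubernetes.io/serviceaccount/token",
		"/var/run/secrets/kubernetes.io/serviceaccount/ca.crt",
		"/var/run/secrets/kubernetes.io/serviceaccount/namespace",
	} {
		data, err := ioutil.ReadFile(f)
		if err != nil {
			fmt.Println(f, "->", err)
			continue
		}
		fmt.Printf("%s: %d bytes\n", f, len(data))
	}
}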
Feb 12 13:11:33.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:11:33.739: INFO: namespace svcaccounts-1131 deletion completed in 6.149887689s • [SLOW TEST:21.634 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:11:33.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 12 13:11:33.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1807' Feb 12 13:11:34.105: INFO: stderr: "" Feb 12 13:11:34.105: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Feb 12 13:11:34.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1807' Feb 12 13:11:40.753: INFO: stderr: "" Feb 12 13:11:40.753: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:11:40.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1807" for this suite. 
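kubectl run with --restart=Never and the run-pod/v1 generator, as used above, amounts to creating a bare pod with RestartPolicy Never. Roughly the following object, under the same v1.15-era client-go assumption as the earlier sketches (the run label is what that generator would add):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// runPodOnce creates the same kind of one-shot pod that
// "kubectl run e2e-test-nginx-pod --restart=Never --image=..." produces.
func runPodOnce(cs kubernetes.Interface, ns string) (*corev1.Pod, error) {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "e2e-test-nginx-pod",
			Labels: map[string]string{"run": "e2e-test-nginx-pod"},
		},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "e2e-test-nginx-pod",
				Image: "docker.io/library/nginx:1.14-alpine",
			}},
		},
	}
	return cs.CoreV1().Pods(ns).Create(pod) // v1.15-era signature
}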
Feb 12 13:11:46.822: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:11:46.941: INFO: namespace kubectl-1807 deletion completed in 6.1727256s • [SLOW TEST:13.202 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:11:46.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1908 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 12 13:11:46.990: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 12 13:12:23.312: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1908 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 13:12:23.313: INFO: >>> kubeConfig: /root/.kube/config I0212 13:12:23.415260 8 log.go:172] (0xc0000ed760) (0xc0000ff400) Create stream I0212 13:12:23.415407 8 log.go:172] (0xc0000ed760) (0xc0000ff400) Stream added, broadcasting: 1 I0212 13:12:23.421618 8 log.go:172] (0xc0000ed760) Reply frame received for 1 I0212 13:12:23.421648 8 log.go:172] (0xc0000ed760) (0xc0009785a0) Create stream I0212 13:12:23.421655 8 log.go:172] (0xc0000ed760) (0xc0009785a0) Stream added, broadcasting: 3 I0212 13:12:23.423410 8 log.go:172] (0xc0000ed760) Reply frame received for 3 I0212 13:12:23.423448 8 log.go:172] (0xc0000ed760) (0xc000978820) Create stream I0212 13:12:23.423464 8 log.go:172] (0xc0000ed760) (0xc000978820) Stream added, broadcasting: 5 I0212 13:12:23.425434 8 log.go:172] (0xc0000ed760) Reply frame received for 5 I0212 13:12:23.703855 8 log.go:172] (0xc0000ed760) Data frame received for 3 I0212 13:12:23.703949 8 log.go:172] (0xc0009785a0) (3) Data frame handling I0212 13:12:23.703988 8 log.go:172] (0xc0009785a0) (3) Data frame sent I0212 13:12:23.831047 8 log.go:172] (0xc0000ed760) Data frame received for 1 I0212 13:12:23.831126 8 log.go:172] (0xc0000ed760) (0xc0009785a0) Stream removed, broadcasting: 3 I0212 13:12:23.831181 8 log.go:172] (0xc0000ff400) (1) Data frame handling I0212 13:12:23.831195 8 log.go:172] (0xc0000ff400) (1) Data frame sent I0212 13:12:23.831206 8 log.go:172] (0xc0000ed760) 
(0xc0000ff400) Stream removed, broadcasting: 1 I0212 13:12:23.831277 8 log.go:172] (0xc0000ed760) (0xc000978820) Stream removed, broadcasting: 5 I0212 13:12:23.831351 8 log.go:172] (0xc0000ed760) Go away received I0212 13:12:23.831398 8 log.go:172] (0xc0000ed760) (0xc0000ff400) Stream removed, broadcasting: 1 I0212 13:12:23.831413 8 log.go:172] (0xc0000ed760) (0xc0009785a0) Stream removed, broadcasting: 3 I0212 13:12:23.831419 8 log.go:172] (0xc0000ed760) (0xc000978820) Stream removed, broadcasting: 5 Feb 12 13:12:23.831: INFO: Found all expected endpoints: [netserver-0] Feb 12 13:12:23.840: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1908 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 13:12:23.840: INFO: >>> kubeConfig: /root/.kube/config I0212 13:12:23.920797 8 log.go:172] (0xc001b313f0) (0xc002b80320) Create stream I0212 13:12:23.921125 8 log.go:172] (0xc001b313f0) (0xc002b80320) Stream added, broadcasting: 1 I0212 13:12:23.932551 8 log.go:172] (0xc001b313f0) Reply frame received for 1 I0212 13:12:23.932633 8 log.go:172] (0xc001b313f0) (0xc0009788c0) Create stream I0212 13:12:23.932647 8 log.go:172] (0xc001b313f0) (0xc0009788c0) Stream added, broadcasting: 3 I0212 13:12:23.936414 8 log.go:172] (0xc001b313f0) Reply frame received for 3 I0212 13:12:23.936445 8 log.go:172] (0xc001b313f0) (0xc000978aa0) Create stream I0212 13:12:23.936455 8 log.go:172] (0xc001b313f0) (0xc000978aa0) Stream added, broadcasting: 5 I0212 13:12:23.943203 8 log.go:172] (0xc001b313f0) Reply frame received for 5 I0212 13:12:24.140598 8 log.go:172] (0xc001b313f0) Data frame received for 3 I0212 13:12:24.140648 8 log.go:172] (0xc0009788c0) (3) Data frame handling I0212 13:12:24.140660 8 log.go:172] (0xc0009788c0) (3) Data frame sent I0212 13:12:24.241486 8 log.go:172] (0xc001b313f0) Data frame received for 1 I0212 13:12:24.241535 8 log.go:172] (0xc001b313f0) (0xc0009788c0) Stream removed, broadcasting: 3 I0212 13:12:24.241594 8 log.go:172] (0xc002b80320) (1) Data frame handling I0212 13:12:24.241610 8 log.go:172] (0xc002b80320) (1) Data frame sent I0212 13:12:24.241618 8 log.go:172] (0xc001b313f0) (0xc002b80320) Stream removed, broadcasting: 1 I0212 13:12:24.241765 8 log.go:172] (0xc001b313f0) (0xc000978aa0) Stream removed, broadcasting: 5 I0212 13:12:24.241816 8 log.go:172] (0xc001b313f0) (0xc002b80320) Stream removed, broadcasting: 1 I0212 13:12:24.241829 8 log.go:172] (0xc001b313f0) (0xc0009788c0) Stream removed, broadcasting: 3 I0212 13:12:24.241836 8 log.go:172] (0xc001b313f0) (0xc000978aa0) Stream removed, broadcasting: 5 I0212 13:12:24.242002 8 log.go:172] (0xc001b313f0) Go away received Feb 12 13:12:24.242: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:12:24.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1908" for this suite. 
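The SPDY stream dumps above are just the transport noise around a curl to each netserver pod's /hostName endpoint. The equivalent check in Go, using the pod IPs from the log (reachable only from inside the cluster network) and the same 15-second ceiling as curl --max-time:

package main

import (
	"fmt"
	"io/ioutil"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 15 * time.Second} // mirrors curl --max-time 15
	for _, ip := range []string{"10.44.0.1", "10.32.0.4"} {
		resp, err := client.Get(fmt.Sprintf("http://%s:8080/hostName", ip))
		if err != nil {
			fmt.Println(ip, "->", err)
			continue
		}
		body, _ := ioutil.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("%s -> %s\n", ip, body)
	}
}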
Feb 12 13:12:48.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:12:48.370: INFO: namespace pod-network-test-1908 deletion completed in 24.120861001s • [SLOW TEST:61.429 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:12:48.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-6112/configmap-test-464a7152-edf7-409e-88bc-eb3487892076 STEP: Creating a pod to test consume configMaps Feb 12 13:12:48.598: INFO: Waiting up to 5m0s for pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c" in namespace "configmap-6112" to be "success or failure" Feb 12 13:12:48.607: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.96896ms Feb 12 13:12:50.615: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016473491s Feb 12 13:12:52.634: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03613732s Feb 12 13:12:54.642: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043566666s Feb 12 13:12:56.656: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057515892s Feb 12 13:12:58.680: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.081903027s Feb 12 13:13:00.688: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.089820879s Feb 12 13:13:02.713: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.114520941s STEP: Saw pod success Feb 12 13:13:02.713: INFO: Pod "pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c" satisfied condition "success or failure" Feb 12 13:13:02.722: INFO: Trying to get logs from node iruya-node pod pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c container env-test: STEP: delete the pod Feb 12 13:13:02.817: INFO: Waiting for pod pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c to disappear Feb 12 13:13:02.905: INFO: Pod pod-configmaps-5917ae2b-ecf7-462e-bec1-55392c3f1a4c no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:13:02.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6112" for this suite. Feb 12 13:13:08.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:13:09.076: INFO: namespace configmap-6112 deletion completed in 6.161347328s • [SLOW TEST:20.706 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:13:09.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-e62903bc-fa5a-4e43-8b1f-6442f18fc782 STEP: Creating a pod to test consume secrets Feb 12 13:13:10.310: INFO: Waiting up to 5m0s for pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b" in namespace "secrets-5626" to be "success or failure" Feb 12 13:13:10.337: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 25.806383ms Feb 12 13:13:12.350: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039233742s Feb 12 13:13:14.362: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050372683s Feb 12 13:13:16.440: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128975449s Feb 12 13:13:18.452: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.14113389s Feb 12 13:13:20.469: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.157829393s Feb 12 13:13:22.482: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.170539365s Feb 12 13:13:24.492: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.1805921s STEP: Saw pod success Feb 12 13:13:24.492: INFO: Pod "pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b" satisfied condition "success or failure" Feb 12 13:13:24.497: INFO: Trying to get logs from node iruya-node pod pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b container secret-volume-test: STEP: delete the pod Feb 12 13:13:25.018: INFO: Waiting for pod pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b to disappear Feb 12 13:13:25.029: INFO: Pod pod-secrets-99c1b693-616d-44e9-91d6-15619e6dfd7b no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:13:25.029: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5626" for this suite. Feb 12 13:13:31.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:13:31.150: INFO: namespace secrets-5626 deletion completed in 6.117368072s STEP: Destroying namespace "secret-namespace-5525" for this suite. Feb 12 13:13:37.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:13:37.279: INFO: namespace secret-namespace-5525 deletion completed in 6.128998339s • [SLOW TEST:28.203 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:13:37.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 12 13:13:37.485: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:13:38.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-1949" for this suite. 
Feb 12 13:13:44.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:13:44.857: INFO: namespace custom-resource-definition-1949 deletion completed in 6.171890708s • [SLOW TEST:7.577 seconds] [sig-api-machinery] CustomResourceDefinition resources /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35 creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:13:44.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-faa378e3-3104-48d2-9ba5-9924c74ea759 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:13:45.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4343" for this suite. 
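The empty-key case above never gets as far as creating anything: apiserver validation rejects a ConfigMap whose Data map contains an empty key. A minimal reproduction, again assuming the v1.15-era client-go signature without a context argument (the ConfigMap name here is illustrative):

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createEmptyKeyConfigMap attempts to create a ConfigMap with an empty data
// key; the apiserver's validation is expected to reject it with an Invalid error.
func createEmptyKeyConfigMap(cs kubernetes.Interface, ns string) error {
	cm := &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
		Data:       map[string]string{"": "value-1"},
	}
	_, err := cs.CoreV1().ConfigMaps(ns).Create(cm) // v1.15-era signature
	fmt.Println("create returned:", err)            // expected: a validation error, not success
	return err
}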
Feb 12 13:13:52.258: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:13:52.558: INFO: namespace configmap-4343 deletion completed in 6.601563964s • [SLOW TEST:7.700 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:13:52.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Feb 12 13:13:52.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1839' Feb 12 13:13:53.104: INFO: stderr: "" Feb 12 13:13:53.104: INFO: stdout: "pod/pause created\n" Feb 12 13:13:53.104: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Feb 12 13:13:53.104: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-1839" to be "running and ready" Feb 12 13:13:53.203: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 98.622774ms Feb 12 13:13:55.209: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10490732s Feb 12 13:13:57.218: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113573243s Feb 12 13:13:59.310: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.205959542s Feb 12 13:14:01.319: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214238414s Feb 12 13:14:03.327: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 10.22242574s Feb 12 13:14:05.336: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 12.231449914s Feb 12 13:14:07.371: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 14.266613148s Feb 12 13:14:09.381: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 16.276474918s Feb 12 13:14:09.381: INFO: Pod "pause" satisfied condition "running and ready" Feb 12 13:14:09.381: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Feb 12 13:14:09.381: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-1839' Feb 12 13:14:09.548: INFO: stderr: "" Feb 12 13:14:09.548: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Feb 12 13:14:09.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1839' Feb 12 13:14:09.674: INFO: stderr: "" Feb 12 13:14:09.674: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 16s testing-label-value\n" STEP: removing the label testing-label of a pod Feb 12 13:14:09.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-1839' Feb 12 13:14:09.955: INFO: stderr: "" Feb 12 13:14:09.955: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Feb 12 13:14:09.955: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-1839' Feb 12 13:14:10.206: INFO: stderr: "" Feb 12 13:14:10.207: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 17s \n" [AfterEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Feb 12 13:14:10.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1839' Feb 12 13:14:10.426: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:14:10.426: INFO: stdout: "pod \"pause\" force deleted\n" Feb 12 13:14:10.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-1839' Feb 12 13:14:10.592: INFO: stderr: "No resources found.\n" Feb 12 13:14:10.592: INFO: stdout: "" Feb 12 13:14:10.592: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-1839 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 12 13:14:10.698: INFO: stderr: "" Feb 12 13:14:10.698: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:14:10.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1839" for this suite. 
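The Kubectl label case above drives everything through the kubectl binary (kubectl label pods pause testing-label=testing-label-value to add, kubectl label pods pause testing-label- to remove). At the API level the same thing is just a read-modify-write of the pod's labels; a rough sketch is below, with illustrative names and the same pre-0.18 client-go signature assumption as the other sketches in this log.

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// relabelPod adds the testing-label the test sets via kubectl, then removes
// it again, using plain read-modify-write updates of the pod object.
func relabelPod(client kubernetes.Interface, namespace, podName string) error {
	pods := client.CoreV1().Pods(namespace)

	pod, err := pods.Get(podName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	if pod.Labels == nil {
		pod.Labels = map[string]string{}
	}
	pod.Labels["testing-label"] = "testing-label-value"
	pod, err = pods.Update(pod)
	if err != nil {
		return err
	}

	// `kubectl label pods pause testing-label-` is the same update with the
	// key deleted.
	delete(pod.Labels, "testing-label")
	_, err = pods.Update(pod)
	return err
}
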
Feb 12 13:14:19.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:14:19.688: INFO: namespace kubectl-1839 deletion completed in 8.977252292s • [SLOW TEST:27.130 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:14:19.689: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:15:26.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-8119" for this suite. 
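The blackbox runtime case above schedules one short-lived container per restart policy (the terminate-cmd-rpa/rpof/rpn variants), lets it exit, and asserts on the RestartCount, Phase, Ready condition and State the kubelet reports. A rough sketch of that kind of pod and of where those fields live follows; the image, command, and names are placeholders rather than the test's actual spec, and the Get signature assumes the pre-0.18 client-go used in the other sketches here.

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// terminatingPod is the general shape of pod the blackbox case schedules: a
// container that exits with a known code under a chosen restart policy
// (Always, OnFailure or Never).
func terminatingPod(name string, exitCode int, policy corev1.RestartPolicy) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sh", "-c", fmt.Sprintf("exit %d", exitCode)},
			}},
		},
	}
}

// firstContainerStatus returns the status block the test asserts on:
// RestartCount, Ready and State live here, while Phase is on pod.Status.
func firstContainerStatus(client kubernetes.Interface, namespace, name string) (*corev1.ContainerStatus, error) {
	pod, err := client.CoreV1().Pods(namespace).Get(name, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}
	if len(pod.Status.ContainerStatuses) == 0 {
		return nil, nil // kubelet has not reported container status yet
	}
	return &pod.Status.ContainerStatuses[0], nil
}
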
Feb 12 13:15:32.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:15:33.059: INFO: namespace container-runtime-8119 deletion completed in 6.114435451s • [SLOW TEST:73.371 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:15:33.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Feb 12 13:15:43.139: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-6af2870a-7ec0-48e0-bfcf-07f6fa3f3e01,GenerateName:,Namespace:events-2146,SelfLink:/api/v1/namespaces/events-2146/pods/send-events-6af2870a-7ec0-48e0-bfcf-07f6fa3f3e01,UID:b049bd56-2162-4b6c-8175-fa8712f394d0,ResourceVersion:24071928,Generation:0,CreationTimestamp:2020-02-12 13:15:33 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 96805034,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-pktnz {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-pktnz,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-pktnz true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001dca330} {node.kubernetes.io/unreachable 
Exists NoExecute 0xc001dca3c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:15:33 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:15:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:15:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:15:33 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-02-12 13:15:33 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-02-12 13:15:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://550fea34cbc17e0a392eaf5d0a7a7d1afe09df77eb5d5d8f78f75610b4a1cc05}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Feb 12 13:15:45.146: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Feb 12 13:15:47.161: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:15:47.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2146" for this suite. Feb 12 13:16:25.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:16:25.370: INFO: namespace events-2146 deletion completed in 38.186043346s • [SLOW TEST:52.310 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:16:25.371: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Feb 12 13:16:25.563: INFO: Waiting up to 5m0s for pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06" in namespace "emptydir-8580" to be "success or failure" Feb 12 13:16:25.573: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 9.791268ms Feb 12 13:16:27.581: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018019865s Feb 12 13:16:29.596: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032420377s Feb 12 13:16:31.605: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042014768s Feb 12 13:16:33.617: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053435591s Feb 12 13:16:35.630: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Running", Reason="", readiness=true. Elapsed: 10.066561827s Feb 12 13:16:37.646: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.082312333s STEP: Saw pod success Feb 12 13:16:37.646: INFO: Pod "pod-60f16972-4cb1-4779-adca-a6b6bbcffd06" satisfied condition "success or failure" Feb 12 13:16:37.650: INFO: Trying to get logs from node iruya-node pod pod-60f16972-4cb1-4779-adca-a6b6bbcffd06 container test-container: STEP: delete the pod Feb 12 13:16:37.811: INFO: Waiting for pod pod-60f16972-4cb1-4779-adca-a6b6bbcffd06 to disappear Feb 12 13:16:37.818: INFO: Pod pod-60f16972-4cb1-4779-adca-a6b6bbcffd06 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:16:37.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8580" for this suite. Feb 12 13:16:43.885: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:16:43.987: INFO: namespace emptydir-8580 deletion completed in 6.158892069s • [SLOW TEST:18.617 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:16:43.989: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Feb 12 13:16:44.060: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:17:01.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5111" 
for this suite. Feb 12 13:17:07.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:17:07.734: INFO: namespace pods-5111 deletion completed in 6.206802883s • [SLOW TEST:23.745 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:17:07.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-95565cf8-bd23-4a3f-badc-9d6bdf5def76 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:17:21.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8449" for this suite. 
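The ConfigMap binary-data case above mounts a ConfigMap that carries both a text entry in Data and raw bytes in BinaryData, then waits until both appear as files in the volume. The object and volume shapes involved look roughly like the sketch below; the key names, byte contents, volume name and mount path are illustrative, not the test's own values.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// binaryConfigMap carries a text entry in Data and raw bytes in BinaryData;
// when mounted, each key becomes a file with the corresponding contents.
func binaryConfigMap(name string) *corev1.ConfigMap {
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Data:       map[string]string{"data": "value"},
		BinaryData: map[string][]byte{"dump": {0xde, 0xca, 0xfb, 0xad}},
	}
}

// configMapVolume is the volume/mount pair that exposes the ConfigMap's keys
// as files under mountPath in the consuming container.
func configMapVolume(cmName, mountPath string) (corev1.Volume, corev1.VolumeMount) {
	vol := corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
			},
		},
	}
	return vol, corev1.VolumeMount{Name: vol.Name, MountPath: mountPath}
}
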
Feb 12 13:17:43.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:17:44.076: INFO: namespace configmap-8449 deletion completed in 22.106164699s • [SLOW TEST:36.342 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:17:44.077: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 12 13:17:53.507: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:17:53.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9999" for this suite. 
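The termination-message case above hinges on TerminationMessagePolicy FallbackToLogsOnError: the container exits successfully and writes nothing to /dev/termination-log, and because the log fallback only applies on failure, the recorded message stays empty, which is what the "Expected: &{} to match Container's Termination Message" line asserts. A sketch of the relevant container fields and of where the message is read back from is below; the container name, image and command are placeholders.

package sketch

import corev1 "k8s.io/api/core/v1"

// fallbackToLogsContainer exits successfully without writing a termination
// message; with FallbackToLogsOnError the kubelet only falls back to the log
// tail on failure, so the recorded message is expected to stay empty.
func fallbackToLogsContainer() corev1.Container {
	return corev1.Container{
		Name:                     "termination-message-container",
		Image:                    "busybox",
		Command:                  []string{"sh", "-c", "exit 0"},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
	}
}

// terminationMessage is where the value compared against "" comes from.
func terminationMessage(pod *corev1.Pod) string {
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Terminated != nil {
			return cs.State.Terminated.Message
		}
	}
	return ""
}
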
Feb 12 13:17:59.646: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:17:59.760: INFO: namespace container-runtime-9999 deletion completed in 6.135654061s • [SLOW TEST:15.683 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:17:59.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 12 13:17:59.833: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:18:20.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6940" for this suite. 
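The InitContainer case above submits a pod whose spec.initContainers must each run to completion, in order, before the regular container is started; with RestartPolicy Always the pod then keeps running, which is the variant this conformance case covers. The pod shape is roughly the following sketch, with placeholder names, images and commands rather than the test's actual spec.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// podWithInitContainers: both init containers must succeed, in order, before
// the regular container starts; RestartPolicy Always keeps the pod running
// afterwards.
func podWithInitContainers(name string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "busybox", Command: []string{"true"}},
				{Name: "init2", Image: "busybox", Command: []string{"true"}},
			},
			Containers: []corev1.Container{
				{Name: "run1", Image: "busybox", Command: []string{"sleep", "3600"}},
			},
		},
	}
}
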
Feb 12 13:18:42.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:18:42.975: INFO: namespace init-container-6940 deletion completed in 22.136559656s • [SLOW TEST:43.215 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:18:42.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-tpnr STEP: Creating a pod to test atomic-volume-subpath Feb 12 13:18:43.063: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tpnr" in namespace "subpath-9775" to be "success or failure" Feb 12 13:18:43.073: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 9.919039ms Feb 12 13:18:45.084: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020904048s Feb 12 13:18:47.090: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027026348s Feb 12 13:18:49.096: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033147193s Feb 12 13:18:51.103: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039744497s Feb 12 13:18:53.111: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 10.047778418s Feb 12 13:18:55.119: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 12.05639394s Feb 12 13:18:57.128: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 14.064893331s Feb 12 13:18:59.138: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 16.074617518s Feb 12 13:19:01.144: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 18.080782719s Feb 12 13:19:03.155: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 20.09158794s Feb 12 13:19:05.162: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 22.099122234s Feb 12 13:19:07.172: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 24.109357337s Feb 12 13:19:09.182: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.119141897s Feb 12 13:19:11.193: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 28.130027118s Feb 12 13:19:13.201: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Running", Reason="", readiness=true. Elapsed: 30.138218529s Feb 12 13:19:15.211: INFO: Pod "pod-subpath-test-secret-tpnr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.147525699s STEP: Saw pod success Feb 12 13:19:15.211: INFO: Pod "pod-subpath-test-secret-tpnr" satisfied condition "success or failure" Feb 12 13:19:15.216: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-tpnr container test-container-subpath-secret-tpnr: STEP: delete the pod Feb 12 13:19:15.336: INFO: Waiting for pod pod-subpath-test-secret-tpnr to disappear Feb 12 13:19:15.344: INFO: Pod pod-subpath-test-secret-tpnr no longer exists STEP: Deleting pod pod-subpath-test-secret-tpnr Feb 12 13:19:15.344: INFO: Deleting pod "pod-subpath-test-secret-tpnr" in namespace "subpath-9775" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:19:15.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9775" for this suite. Feb 12 13:19:21.379: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:19:21.517: INFO: namespace subpath-9775 deletion completed in 6.164396418s • [SLOW TEST:38.541 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:19:21.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Feb 12 13:19:21.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-919' Feb 12 13:19:21.943: INFO: stderr: "" Feb 12 13:19:21.944: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
Feb 12 13:19:21.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:22.090: INFO: stderr: "" Feb 12 13:19:22.090: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b " Feb 12 13:19:22.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27sgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:22.308: INFO: stderr: "" Feb 12 13:19:22.308: INFO: stdout: "" Feb 12 13:19:22.308: INFO: update-demo-nautilus-27sgz is created but not running Feb 12 13:19:27.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:27.429: INFO: stderr: "" Feb 12 13:19:27.429: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b " Feb 12 13:19:27.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27sgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:28.956: INFO: stderr: "" Feb 12 13:19:28.956: INFO: stdout: "" Feb 12 13:19:28.956: INFO: update-demo-nautilus-27sgz is created but not running Feb 12 13:19:33.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:34.199: INFO: stderr: "" Feb 12 13:19:34.199: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b " Feb 12 13:19:34.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27sgz -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:34.302: INFO: stderr: "" Feb 12 13:19:34.303: INFO: stdout: "true" Feb 12 13:19:34.303: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-27sgz -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:34.489: INFO: stderr: "" Feb 12 13:19:34.489: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 13:19:34.489: INFO: validating pod update-demo-nautilus-27sgz Feb 12 13:19:34.511: INFO: got data: { "image": "nautilus.jpg" } Feb 12 13:19:34.512: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 13:19:34.512: INFO: update-demo-nautilus-27sgz is verified up and running Feb 12 13:19:34.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:34.655: INFO: stderr: "" Feb 12 13:19:34.655: INFO: stdout: "true" Feb 12 13:19:34.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:34.777: INFO: stderr: "" Feb 12 13:19:34.777: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 13:19:34.777: INFO: validating pod update-demo-nautilus-dn62b Feb 12 13:19:34.782: INFO: got data: { "image": "nautilus.jpg" } Feb 12 13:19:34.782: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 13:19:34.782: INFO: update-demo-nautilus-dn62b is verified up and running STEP: scaling down the replication controller Feb 12 13:19:34.784: INFO: scanned /root for discovery docs: Feb 12 13:19:34.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-919' Feb 12 13:19:35.953: INFO: stderr: "" Feb 12 13:19:35.953: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 12 13:19:35.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:36.139: INFO: stderr: "" Feb 12 13:19:36.140: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 12 13:19:41.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:41.281: INFO: stderr: "" Feb 12 13:19:41.281: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 12 13:19:46.282: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:46.528: INFO: stderr: "" Feb 12 13:19:46.528: INFO: stdout: "update-demo-nautilus-27sgz update-demo-nautilus-dn62b " STEP: Replicas for name=update-demo: expected=1 actual=2 Feb 12 13:19:51.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:51.715: INFO: stderr: "" Feb 12 13:19:51.715: INFO: stdout: "update-demo-nautilus-dn62b " Feb 12 13:19:51.716: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:51.847: INFO: stderr: "" Feb 12 13:19:51.847: INFO: stdout: "true" Feb 12 13:19:51.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:51.986: INFO: stderr: "" Feb 12 13:19:51.986: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 13:19:51.986: INFO: validating pod update-demo-nautilus-dn62b Feb 12 13:19:51.992: INFO: got data: { "image": "nautilus.jpg" } Feb 12 13:19:51.992: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 13:19:51.992: INFO: update-demo-nautilus-dn62b is verified up and running STEP: scaling up the replication controller Feb 12 13:19:51.994: INFO: scanned /root for discovery docs: Feb 12 13:19:51.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-919' Feb 12 13:19:53.200: INFO: stderr: "" Feb 12 13:19:53.200: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. Feb 12 13:19:53.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:53.493: INFO: stderr: "" Feb 12 13:19:53.493: INFO: stdout: "update-demo-nautilus-89mfq update-demo-nautilus-dn62b " Feb 12 13:19:53.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89mfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:53.599: INFO: stderr: "" Feb 12 13:19:53.599: INFO: stdout: "" Feb 12 13:19:53.599: INFO: update-demo-nautilus-89mfq is created but not running Feb 12 13:19:58.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:19:58.752: INFO: stderr: "" Feb 12 13:19:58.752: INFO: stdout: "update-demo-nautilus-89mfq update-demo-nautilus-dn62b " Feb 12 13:19:58.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89mfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:19:58.906: INFO: stderr: "" Feb 12 13:19:58.906: INFO: stdout: "" Feb 12 13:19:58.907: INFO: update-demo-nautilus-89mfq is created but not running Feb 12 13:20:03.907: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-919' Feb 12 13:20:04.173: INFO: stderr: "" Feb 12 13:20:04.173: INFO: stdout: "update-demo-nautilus-89mfq update-demo-nautilus-dn62b " Feb 12 13:20:04.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89mfq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:20:04.270: INFO: stderr: "" Feb 12 13:20:04.271: INFO: stdout: "true" Feb 12 13:20:04.271: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-89mfq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:20:04.365: INFO: stderr: "" Feb 12 13:20:04.365: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 13:20:04.365: INFO: validating pod update-demo-nautilus-89mfq Feb 12 13:20:04.377: INFO: got data: { "image": "nautilus.jpg" } Feb 12 13:20:04.377: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 13:20:04.377: INFO: update-demo-nautilus-89mfq is verified up and running Feb 12 13:20:04.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:20:04.465: INFO: stderr: "" Feb 12 13:20:04.465: INFO: stdout: "true" Feb 12 13:20:04.465: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dn62b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-919' Feb 12 13:20:04.599: INFO: stderr: "" Feb 12 13:20:04.599: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Feb 12 13:20:04.599: INFO: validating pod update-demo-nautilus-dn62b Feb 12 13:20:04.609: INFO: got data: { "image": "nautilus.jpg" } Feb 12 13:20:04.609: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Feb 12 13:20:04.609: INFO: update-demo-nautilus-dn62b is verified up and running STEP: using delete to clean up resources Feb 12 13:20:04.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-919' Feb 12 13:20:04.749: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:20:04.749: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Feb 12 13:20:04.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-919' Feb 12 13:20:04.876: INFO: stderr: "No resources found.\n" Feb 12 13:20:04.876: INFO: stdout: "" Feb 12 13:20:04.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-919 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Feb 12 13:20:05.022: INFO: stderr: "" Feb 12 13:20:05.023: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:20:05.023: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-919" for this suite. 
Feb 12 13:20:27.105: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:20:27.209: INFO: namespace kubectl-919 deletion completed in 22.144607857s • [SLOW TEST:65.692 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:20:27.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 13:20:27.274: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336" in namespace "projected-9737" to be "success or failure" Feb 12 13:20:27.294: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 19.837452ms Feb 12 13:20:29.303: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029164506s Feb 12 13:20:31.313: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038899181s Feb 12 13:20:33.322: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048439464s Feb 12 13:20:35.330: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055893242s Feb 12 13:20:37.340: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.065695023s STEP: Saw pod success Feb 12 13:20:37.340: INFO: Pod "downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336" satisfied condition "success or failure" Feb 12 13:20:37.344: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336 container client-container: STEP: delete the pod Feb 12 13:20:37.418: INFO: Waiting for pod downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336 to disappear Feb 12 13:20:37.435: INFO: Pod downwardapi-volume-e0e41fa5-fd5a-4818-9327-a70497c9e336 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:20:37.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9737" for this suite. Feb 12 13:20:45.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:20:45.759: INFO: namespace projected-9737 deletion completed in 8.315879791s • [SLOW TEST:18.549 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:20:45.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Feb 12 13:20:46.018: INFO: Waiting up to 5m0s for pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88" in namespace "emptydir-1557" to be "success or failure" Feb 12 13:20:46.040: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Pending", Reason="", readiness=false. Elapsed: 21.786878ms Feb 12 13:20:48.050: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031683712s Feb 12 13:20:50.061: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042862921s Feb 12 13:20:52.069: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050164536s Feb 12 13:20:54.078: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Running", Reason="", readiness=true. Elapsed: 8.059981852s Feb 12 13:20:56.086: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.067804946s STEP: Saw pod success Feb 12 13:20:56.086: INFO: Pod "pod-6d290536-f01b-4e4b-ab43-e1988dd95b88" satisfied condition "success or failure" Feb 12 13:20:56.092: INFO: Trying to get logs from node iruya-node pod pod-6d290536-f01b-4e4b-ab43-e1988dd95b88 container test-container: STEP: delete the pod Feb 12 13:20:56.207: INFO: Waiting for pod pod-6d290536-f01b-4e4b-ab43-e1988dd95b88 to disappear Feb 12 13:20:56.215: INFO: Pod pod-6d290536-f01b-4e4b-ab43-e1988dd95b88 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:20:56.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1557" for this suite. Feb 12 13:21:02.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:21:02.334: INFO: namespace emptydir-1557 deletion completed in 6.110955862s • [SLOW TEST:16.575 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:21:02.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Feb 12 13:21:02.456: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9485,SelfLink:/api/v1/namespaces/watch-9485/configmaps/e2e-watch-test-watch-closed,UID:5ad6d4b0-20eb-4978-aba0-e3825f01b62a,ResourceVersion:24072666,Generation:0,CreationTimestamp:2020-02-12 13:21:02 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 13:21:02.456: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9485,SelfLink:/api/v1/namespaces/watch-9485/configmaps/e2e-watch-test-watch-closed,UID:5ad6d4b0-20eb-4978-aba0-e3825f01b62a,ResourceVersion:24072668,Generation:0,CreationTimestamp:2020-02-12 13:21:02 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Feb 12 13:21:02.479: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9485,SelfLink:/api/v1/namespaces/watch-9485/configmaps/e2e-watch-test-watch-closed,UID:5ad6d4b0-20eb-4978-aba0-e3825f01b62a,ResourceVersion:24072669,Generation:0,CreationTimestamp:2020-02-12 13:21:02 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 12 13:21:02.480: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-9485,SelfLink:/api/v1/namespaces/watch-9485/configmaps/e2e-watch-test-watch-closed,UID:5ad6d4b0-20eb-4978-aba0-e3825f01b62a,ResourceVersion:24072670,Generation:0,CreationTimestamp:2020-02-12 13:21:02 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:21:02.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9485" for this suite. 
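The Watchers case above closes its first configmap watch after two events, mutates and deletes the object while no watch is open, and then opens a new watch starting from the last ResourceVersion it observed, so the intervening MODIFIED and DELETED events are still delivered. Restarting a watch from a recorded resource version looks roughly like the sketch below; the function name is illustrative, the selector and version arguments are whatever the first watch recorded, and the Watch signature assumes the same pre-0.18 client-go as the other sketches here.

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// resumeConfigMapWatch opens a new watch starting at the resourceVersion of
// the last event the previous watch delivered; events that occurred after
// that version while no watch was open are still delivered to the new watch.
func resumeConfigMapWatch(client kubernetes.Interface, namespace, labelSelector, lastSeenRV string) (watch.Interface, error) {
	return client.CoreV1().ConfigMaps(namespace).Watch(metav1.ListOptions{
		LabelSelector:   labelSelector,
		ResourceVersion: lastSeenRV,
	})
}

// The caller then ranges over ResultChan() and inspects each event's Type and
// Object, which is what the "Got : MODIFIED ..." and "Got : DELETED ..." log
// entries above reflect.
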
Feb 12 13:21:08.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:21:08.680: INFO: namespace watch-9485 deletion completed in 6.194676695s • [SLOW TEST:6.346 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:21:08.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-7fx4t in namespace proxy-7982 I0212 13:21:08.800925 8 runners.go:180] Created replication controller with name: proxy-service-7fx4t, namespace: proxy-7982, replica count: 1 I0212 13:21:09.851739 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 13:21:10.852119 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 13:21:11.852484 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 13:21:12.852959 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 13:21:13.853411 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 13:21:14.853742 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 13:21:15.854064 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 13:21:16.854789 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0212 13:21:17.855586 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 13:21:18.856046 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 13:21:19.856656 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 
runningButNotReady I0212 13:21:20.857246 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 13:21:21.857894 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0212 13:21:22.858675 8 runners.go:180] proxy-service-7fx4t Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Feb 12 13:21:22.873: INFO: setup took 14.129266259s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Feb 12 13:21:22.958: INFO: (0) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 84.413049ms) Feb 12 13:21:22.959: INFO: (0) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 85.215119ms) Feb 12 13:21:22.961: INFO: (0) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 86.58373ms) Feb 12 13:21:22.961: INFO: (0) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 87.469865ms) Feb 12 13:21:22.965: INFO: (0) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 90.729219ms) Feb 12 13:21:22.965: INFO: (0) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 91.446911ms) Feb 12 13:21:22.965: INFO: (0) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 91.351239ms) Feb 12 13:21:22.965: INFO: (0) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 91.601583ms) Feb 12 13:21:22.966: INFO: (0) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 91.516793ms) Feb 12 13:21:22.966: INFO: (0) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 91.550161ms) Feb 12 13:21:22.968: INFO: (0) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 94.126632ms) Feb 12 13:21:23.038: INFO: (0) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 163.787829ms) Feb 12 13:21:23.038: INFO: (0) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 41.470526ms) Feb 12 13:21:23.080: INFO: (1) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 41.232707ms) Feb 12 13:21:23.080: INFO: (1) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 41.561818ms) Feb 12 13:21:23.090: INFO: (1) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 51.18929ms) Feb 12 13:21:23.090: INFO: (1) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 51.261313ms) Feb 12 13:21:23.090: INFO: (1) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... 
(200; 51.457454ms) Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 58.207111ms) Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 57.840966ms) Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 58.056955ms) Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 58.405681ms) Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 58.033682ms) Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 57.993516ms) Feb 12 13:21:23.097: INFO: (1) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 58.375341ms) Feb 12 13:21:23.117: INFO: (2) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 19.688549ms) Feb 12 13:21:23.117: INFO: (2) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 19.595629ms) Feb 12 13:21:23.118: INFO: (2) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 19.983311ms) Feb 12 13:21:23.118: INFO: (2) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 19.834641ms) Feb 12 13:21:23.118: INFO: (2) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 20.722569ms) Feb 12 13:21:23.118: INFO: (2) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 26.527834ms) Feb 12 13:21:23.125: INFO: (2) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 27.580139ms) Feb 12 13:21:23.125: INFO: (2) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 27.73865ms) Feb 12 13:21:23.125: INFO: (2) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 27.593908ms) Feb 12 13:21:23.126: INFO: (2) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 28.577447ms) Feb 12 13:21:23.143: INFO: (3) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 16.568448ms) Feb 12 13:21:23.143: INFO: (3) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 17.095156ms) Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.834571ms) Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 19.985109ms) Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 19.920678ms) Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... 
(200; 20.269128ms) Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.987024ms) Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 20.012919ms) Feb 12 13:21:23.146: INFO: (3) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 20.130113ms) Feb 12 13:21:23.148: INFO: (3) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 21.663098ms) Feb 12 13:21:23.149: INFO: (3) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 22.79416ms) Feb 12 13:21:23.149: INFO: (3) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 22.802703ms) Feb 12 13:21:23.149: INFO: (3) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 22.630257ms) Feb 12 13:21:23.149: INFO: (3) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 22.976595ms) Feb 12 13:21:23.150: INFO: (3) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 24.024632ms) Feb 12 13:21:23.163: INFO: (4) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 12.130389ms) Feb 12 13:21:23.163: INFO: (4) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 12.652403ms) Feb 12 13:21:23.163: INFO: (4) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 12.607966ms) Feb 12 13:21:23.164: INFO: (4) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 13.012625ms) Feb 12 13:21:23.164: INFO: (4) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 13.222711ms) Feb 12 13:21:23.164: INFO: (4) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 13.081505ms) Feb 12 13:21:23.164: INFO: (4) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 13.345492ms) Feb 12 13:21:23.166: INFO: (4) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 14.943604ms) Feb 12 13:21:23.166: INFO: (4) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 15.344819ms) Feb 12 13:21:23.168: INFO: (4) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 9.977051ms) Feb 12 13:21:23.180: INFO: (5) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... 
(200; 10.184007ms) Feb 12 13:21:23.180: INFO: (5) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 10.485131ms) Feb 12 13:21:23.181: INFO: (5) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 10.737991ms) Feb 12 13:21:23.181: INFO: (5) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 10.816837ms) Feb 12 13:21:23.181: INFO: (5) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 10.818813ms) Feb 12 13:21:23.182: INFO: (5) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 12.319796ms) Feb 12 13:21:23.183: INFO: (5) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 13.234461ms) Feb 12 13:21:23.183: INFO: (5) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 13.287135ms) Feb 12 13:21:23.183: INFO: (5) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 9.545554ms) Feb 12 13:21:23.196: INFO: (6) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 9.235663ms) Feb 12 13:21:23.198: INFO: (6) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: ... (200; 11.398223ms) Feb 12 13:21:23.198: INFO: (6) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 11.630056ms) Feb 12 13:21:23.200: INFO: (6) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 13.353221ms) Feb 12 13:21:23.200: INFO: (6) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 13.870862ms) Feb 12 13:21:23.201: INFO: (6) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 14.021789ms) Feb 12 13:21:23.201: INFO: (6) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 14.257559ms) Feb 12 13:21:23.201: INFO: (6) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 13.982431ms) Feb 12 13:21:23.201: INFO: (6) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 14.34601ms) Feb 12 13:21:23.202: INFO: (6) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 15.349349ms) Feb 12 13:21:23.211: INFO: (7) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 8.915105ms) Feb 12 13:21:23.211: INFO: (7) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 9.096251ms) Feb 12 13:21:23.212: INFO: (7) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 10.3939ms) Feb 12 13:21:23.214: INFO: (7) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 12.255376ms) Feb 12 13:21:23.215: INFO: (7) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 12.600245ms) Feb 12 13:21:23.215: INFO: (7) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 13.166803ms) Feb 12 13:21:23.215: INFO: (7) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: ... 
(200; 13.595272ms) Feb 12 13:21:23.216: INFO: (7) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 13.933676ms) Feb 12 13:21:23.219: INFO: (7) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 16.833315ms) Feb 12 13:21:23.219: INFO: (7) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 17.195839ms) Feb 12 13:21:23.221: INFO: (7) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 18.963226ms) Feb 12 13:21:23.222: INFO: (7) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 19.594124ms) Feb 12 13:21:23.222: INFO: (7) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.632762ms) Feb 12 13:21:23.222: INFO: (7) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 20.034521ms) Feb 12 13:21:23.235: INFO: (8) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 12.299012ms) Feb 12 13:21:23.238: INFO: (8) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 15.743016ms) Feb 12 13:21:23.239: INFO: (8) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 16.922122ms) Feb 12 13:21:23.241: INFO: (8) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 18.154857ms) Feb 12 13:21:23.241: INFO: (8) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 18.455563ms) Feb 12 13:21:23.241: INFO: (8) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: ... (200; 18.525844ms) Feb 12 13:21:23.241: INFO: (8) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 18.787352ms) Feb 12 13:21:23.244: INFO: (8) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 21.743917ms) Feb 12 13:21:23.245: INFO: (8) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 22.141452ms) Feb 12 13:21:23.245: INFO: (8) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... 
(200; 22.408841ms) Feb 12 13:21:23.246: INFO: (8) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 23.052776ms) Feb 12 13:21:23.247: INFO: (8) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 24.253624ms) Feb 12 13:21:23.247: INFO: (8) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 24.243926ms) Feb 12 13:21:23.247: INFO: (8) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 24.087563ms) Feb 12 13:21:23.247: INFO: (8) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 24.301597ms) Feb 12 13:21:23.267: INFO: (9) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 20.064652ms) Feb 12 13:21:23.267: INFO: (9) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 20.085128ms) Feb 12 13:21:23.268: INFO: (9) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.931298ms) Feb 12 13:21:23.268: INFO: (9) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 20.832647ms) Feb 12 13:21:23.269: INFO: (9) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 21.019532ms) Feb 12 13:21:23.269: INFO: (9) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 20.830717ms) Feb 12 13:21:23.270: INFO: (9) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 22.505309ms) Feb 12 13:21:23.270: INFO: (9) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 22.269462ms) Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 23.337456ms) Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 23.329189ms) Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 23.773332ms) Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 24.011385ms) Feb 12 13:21:23.271: INFO: (9) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 23.931949ms) Feb 12 13:21:23.272: INFO: (9) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 24.224549ms) Feb 12 13:21:23.272: INFO: (9) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 24.660504ms) Feb 12 13:21:23.272: INFO: (9) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 13.351532ms) Feb 12 13:21:23.288: INFO: (10) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 15.170195ms) Feb 12 13:21:23.289: INFO: (10) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 16.518039ms) Feb 12 13:21:23.290: INFO: (10) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 17.57662ms) Feb 12 13:21:23.290: INFO: (10) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 17.235788ms) Feb 12 13:21:23.291: INFO: (10) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 17.651159ms) Feb 12 13:21:23.291: INFO: (10) 
/api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 18.110922ms) Feb 12 13:21:23.292: INFO: (10) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 18.673746ms) Feb 12 13:21:23.292: INFO: (10) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 19.109903ms) Feb 12 13:21:23.292: INFO: (10) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 19.007639ms) Feb 12 13:21:23.292: INFO: (10) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 13.979902ms) Feb 12 13:21:23.309: INFO: (11) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 13.546141ms) Feb 12 13:21:23.309: INFO: (11) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 13.8313ms) Feb 12 13:21:23.314: INFO: (11) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 18.956171ms) Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 19.168285ms) Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 19.400994ms) Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 19.847241ms) Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 19.516438ms) Feb 12 13:21:23.315: INFO: (11) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 19.590845ms) Feb 12 13:21:23.316: INFO: (11) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 20.652449ms) Feb 12 13:21:23.316: INFO: (11) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 20.865759ms) Feb 12 13:21:23.317: INFO: (11) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 21.547579ms) Feb 12 13:21:23.318: INFO: (11) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 21.996981ms) Feb 12 13:21:23.332: INFO: (12) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 14.344227ms) Feb 12 13:21:23.333: INFO: (12) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 14.852734ms) Feb 12 13:21:23.333: INFO: (12) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 14.880217ms) Feb 12 13:21:23.334: INFO: (12) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... 
(200; 16.12148ms) Feb 12 13:21:23.334: INFO: (12) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 16.471167ms) Feb 12 13:21:23.336: INFO: (12) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 17.895805ms) Feb 12 13:21:23.336: INFO: (12) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 18.146696ms) Feb 12 13:21:23.339: INFO: (12) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 20.966505ms) Feb 12 13:21:23.339: INFO: (12) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 21.001656ms) Feb 12 13:21:23.340: INFO: (12) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 21.612141ms) Feb 12 13:21:23.340: INFO: (12) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 22.431171ms) Feb 12 13:21:23.341: INFO: (12) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 22.470484ms) Feb 12 13:21:23.344: INFO: (12) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 26.136588ms) Feb 12 13:21:23.365: INFO: (13) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 19.488384ms) Feb 12 13:21:23.365: INFO: (13) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 19.70173ms) Feb 12 13:21:23.366: INFO: (13) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 20.530135ms) Feb 12 13:21:23.366: INFO: (13) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 21.524262ms) Feb 12 13:21:23.366: INFO: (13) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 21.881299ms) Feb 12 13:21:23.368: INFO: (13) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 22.669281ms) Feb 12 13:21:23.368: INFO: (13) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 22.745586ms) Feb 12 13:21:23.369: INFO: (13) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 24.248851ms) Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 25.04042ms) Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 25.665253ms) Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 25.675233ms) Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 25.59017ms) Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 25.400596ms) Feb 12 13:21:23.370: INFO: (13) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... 
(200; 25.469114ms) Feb 12 13:21:23.380: INFO: (14) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 9.735146ms) Feb 12 13:21:23.380: INFO: (14) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 9.525161ms) Feb 12 13:21:23.381: INFO: (14) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 9.960071ms) Feb 12 13:21:23.381: INFO: (14) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 10.333896ms) Feb 12 13:21:23.382: INFO: (14) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 10.726485ms) Feb 12 13:21:23.382: INFO: (14) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 11.016053ms) Feb 12 13:21:23.382: INFO: (14) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 11.221689ms) Feb 12 13:21:23.382: INFO: (14) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 11.187611ms) Feb 12 13:21:23.384: INFO: (14) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 12.797384ms) Feb 12 13:21:23.385: INFO: (14) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 13.711374ms) Feb 12 13:21:23.385: INFO: (14) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 14.052491ms) Feb 12 13:21:23.388: INFO: (14) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 16.890487ms) Feb 12 13:21:23.388: INFO: (14) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 16.839215ms) Feb 12 13:21:23.388: INFO: (14) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 17.267706ms) Feb 12 13:21:23.388: INFO: (14) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 17.266544ms) Feb 12 13:21:23.399: INFO: (15) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 10.809834ms) Feb 12 13:21:23.400: INFO: (15) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 11.459416ms) Feb 12 13:21:23.401: INFO: (15) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 12.191698ms) Feb 12 13:21:23.401: INFO: (15) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 12.16259ms) Feb 12 13:21:23.401: INFO: (15) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 13.047804ms) Feb 12 13:21:23.402: INFO: (15) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 13.327138ms) Feb 12 13:21:23.402: INFO: (15) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 13.198804ms) Feb 12 13:21:23.402: INFO: (15) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 16.953861ms) Feb 12 13:21:23.407: INFO: (15) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... 
(200; 18.747787ms) Feb 12 13:21:23.407: INFO: (15) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 19.055153ms) Feb 12 13:21:23.407: INFO: (15) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 18.922822ms) Feb 12 13:21:23.411: INFO: (15) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 22.428887ms) Feb 12 13:21:23.411: INFO: (15) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 22.533258ms) Feb 12 13:21:23.411: INFO: (15) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 22.548274ms) Feb 12 13:21:23.411: INFO: (15) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 22.569119ms) Feb 12 13:21:23.430: INFO: (16) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 18.654526ms) Feb 12 13:21:23.431: INFO: (16) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 19.118942ms) Feb 12 13:21:23.431: INFO: (16) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 19.29589ms) Feb 12 13:21:23.431: INFO: (16) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 19.12447ms) Feb 12 13:21:23.432: INFO: (16) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 20.137537ms) Feb 12 13:21:23.432: INFO: (16) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 20.514447ms) Feb 12 13:21:23.434: INFO: (16) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 22.70538ms) Feb 12 13:21:23.443: INFO: (16) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 35.071419ms) Feb 12 13:21:23.447: INFO: (16) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 35.529425ms) Feb 12 13:21:23.470: INFO: (17) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname1/proxy/: foo (200; 22.994159ms) Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 24.23728ms) Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/services/http:proxy-service-7fx4t:portname2/proxy/: bar (200; 24.425702ms) Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 25.147243ms) Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 24.835234ms) Feb 12 13:21:23.472: INFO: (17) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 24.959793ms) Feb 12 13:21:23.473: INFO: (17) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 26.016409ms) Feb 12 13:21:23.473: INFO: (17) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... 
(200; 27.332396ms) Feb 12 13:21:23.474: INFO: (17) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 27.173731ms) Feb 12 13:21:23.474: INFO: (17) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 27.242632ms) Feb 12 13:21:23.474: INFO: (17) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 27.149875ms) Feb 12 13:21:23.477: INFO: (17) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 29.313135ms) Feb 12 13:21:23.477: INFO: (17) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname2/proxy/: bar (200; 29.84559ms) Feb 12 13:21:23.478: INFO: (17) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname1/proxy/: tls baz (200; 30.821235ms) Feb 12 13:21:23.478: INFO: (17) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 31.14268ms) Feb 12 13:21:23.509: INFO: (18) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj/proxy/: test (200; 31.053026ms) Feb 12 13:21:23.510: INFO: (18) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 31.83658ms) Feb 12 13:21:23.511: INFO: (18) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 32.038962ms) Feb 12 13:21:23.511: INFO: (18) /api/v1/namespaces/proxy-7982/services/https:proxy-service-7fx4t:tlsportname2/proxy/: tls qux (200; 32.673962ms) Feb 12 13:21:23.511: INFO: (18) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:1080/proxy/: test<... (200; 33.071601ms) Feb 12 13:21:23.511: INFO: (18) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 32.939108ms) Feb 12 13:21:23.512: INFO: (18) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 33.084952ms) Feb 12 13:21:23.512: INFO: (18) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... (200; 33.385858ms) Feb 12 13:21:23.512: INFO: (18) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test<... (200; 17.769879ms) Feb 12 13:21:23.541: INFO: (19) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:443/proxy/: test (200; 24.046258ms) Feb 12 13:21:23.544: INFO: (19) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:460/proxy/: tls baz (200; 25.009011ms) Feb 12 13:21:23.545: INFO: (19) /api/v1/namespaces/proxy-7982/pods/https:proxy-service-7fx4t-4mlbj:462/proxy/: tls qux (200; 25.120219ms) Feb 12 13:21:23.545: INFO: (19) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:1080/proxy/: ... 
(200; 25.623354ms) Feb 12 13:21:23.546: INFO: (19) /api/v1/namespaces/proxy-7982/services/proxy-service-7fx4t:portname1/proxy/: foo (200; 26.660011ms) Feb 12 13:21:23.546: INFO: (19) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 27.152368ms) Feb 12 13:21:23.547: INFO: (19) /api/v1/namespaces/proxy-7982/pods/proxy-service-7fx4t-4mlbj:162/proxy/: bar (200; 27.294536ms) Feb 12 13:21:23.547: INFO: (19) /api/v1/namespaces/proxy-7982/pods/http:proxy-service-7fx4t-4mlbj:160/proxy/: foo (200; 27.570337ms) STEP: deleting ReplicationController proxy-service-7fx4t in namespace proxy-7982, will wait for the garbage collector to delete the pods Feb 12 13:21:23.615: INFO: Deleting ReplicationController proxy-service-7fx4t took: 10.60275ms Feb 12 13:21:23.916: INFO: Terminating ReplicationController proxy-service-7fx4t pods took: 301.132236ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:21:36.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7982" for this suite. Feb 12 13:21:42.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:21:42.777: INFO: namespace proxy-7982 deletion completed in 6.147856311s • [SLOW TEST:34.097 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:21:42.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 12 13:21:42.923: INFO: Waiting up to 5m0s for pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c" in namespace "downward-api-558" to be "success or failure" Feb 12 13:21:42.944: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 21.211981ms Feb 12 13:21:44.952: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029370128s Feb 12 13:21:46.961: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038202206s Feb 12 13:21:48.970: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047236297s Feb 12 13:21:50.976: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.052789519s Feb 12 13:21:52.984: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061471275s STEP: Saw pod success Feb 12 13:21:52.985: INFO: Pod "downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c" satisfied condition "success or failure" Feb 12 13:21:52.988: INFO: Trying to get logs from node iruya-node pod downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c container dapi-container: STEP: delete the pod Feb 12 13:21:53.515: INFO: Waiting for pod downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c to disappear Feb 12 13:21:53.532: INFO: Pod downward-api-ed48897a-234f-43f4-9362-0e0d5c94d84c no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:21:53.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-558" for this suite. Feb 12 13:21:59.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:21:59.954: INFO: namespace downward-api-558 deletion completed in 6.400635991s • [SLOW TEST:17.176 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:21:59.954: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 12 13:22:08.691: INFO: Successfully updated pod "labelsupdate759b199d-f4f9-4e8a-a943-0b80a6c4e3d8" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:22:12.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2060" for this suite. 
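As a reference for the Proxy v1 spec earlier in this run, which repeatedly hits URLs of the form /api/v1/namespaces/<ns>/pods/<scheme>:<pod>:<port>/proxy/ and /api/v1/namespaces/<ns>/services/<svc>:<portname>/proxy/: the same requests can be issued from Go through client-go's generic REST client. This is a minimal sketch, not the test's implementation; the namespace, pod and service names, and ports are placeholders rather than the generated ones above, and it assumes a client-go release where DoRaw takes a context.

package main

import (
    "context"
    "fmt"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    // GET /api/v1/namespaces/proxy-demo/pods/http:echo-pod:1080/proxy
    // The apiserver forwards the request to port 1080 of the named pod.
    body, err := cs.CoreV1().RESTClient().
        Get().
        Namespace("proxy-demo").
        Resource("pods").
        Name("http:echo-pod:1080").
        SubResource("proxy").
        DoRaw(context.TODO())
    if err != nil {
        panic(err)
    }
    fmt.Printf("pod proxy returned %d bytes\n", len(body))

    // GET /api/v1/namespaces/proxy-demo/services/echo-service:portname1/proxy
    // Same idea, but routed through a named service port.
    body, err = cs.CoreV1().RESTClient().
        Get().
        Namespace("proxy-demo").
        Resource("services").
        Name("echo-service:portname1").
        SubResource("proxy").
        DoRaw(context.TODO())
    if err != nil {
        panic(err)
    }
    fmt.Printf("service proxy returned %d bytes\n", len(body))
}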
Feb 12 13:22:34.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:22:35.003: INFO: namespace downward-api-2060 deletion completed in 22.18009327s • [SLOW TEST:35.049 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:22:35.003: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 13:22:35.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0" in namespace "downward-api-4916" to be "success or failure" Feb 12 13:22:35.107: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.781359ms Feb 12 13:22:37.118: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017591931s Feb 12 13:22:39.129: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02820845s Feb 12 13:22:41.140: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039646332s Feb 12 13:22:43.148: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047324493s Feb 12 13:22:45.165: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.064201717s STEP: Saw pod success Feb 12 13:22:45.165: INFO: Pod "downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0" satisfied condition "success or failure" Feb 12 13:22:45.172: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0 container client-container: STEP: delete the pod Feb 12 13:22:45.351: INFO: Waiting for pod downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0 to disappear Feb 12 13:22:45.390: INFO: Pod downwardapi-volume-675de254-4bec-4991-aa71-6b9551d085e0 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:22:45.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4916" for this suite. 
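The "container's cpu request" spec that just finished mounts a downward API volume whose item uses resourceFieldRef, so the container can read its own CPU request back from a file. Below is a rough equivalent of that pod shape built with the corev1 types; the names, image, mount path, and the 250m request are illustrative assumptions, not the test's actual spec.

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// cpuRequestPod builds a pod whose downward API volume exposes the
// container's own CPU request as the file /etc/podinfo/cpu_request.
func cpuRequestPod() *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{
                        corev1.ResourceCPU: resource.MustParse("250m"), // illustrative request
                    },
                },
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "podinfo",
                    MountPath: "/etc/podinfo",
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "cpu_request",
                            // resourceFieldRef projects the named container's
                            // resource request/limit into the mounted file.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.cpu",
                            },
                        }},
                    },
                },
            }},
        },
    }
}

func main() { fmt.Printf("%+v\n", cpuRequestPod()) }

The same DownwardAPIVolumeFile list can carry fieldRef items (e.g. metadata.labels), which is what the labels-update spec above relies on: label changes are re-projected into the mounted files without restarting the pod.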
Feb 12 13:22:51.586: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:22:51.725: INFO: namespace downward-api-4916 deletion completed in 6.327443869s • [SLOW TEST:16.722 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:22:51.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 12 13:23:12.000: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:12.038: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:14.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:14.045: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:16.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:16.045: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:18.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:18.046: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:20.039: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:20.051: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:22.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:22.047: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:24.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:24.049: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:26.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:26.043: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:28.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:28.046: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:30.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:30.049: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:32.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:32.045: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:34.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:34.059: INFO: Pod 
pod-with-prestop-exec-hook still exists Feb 12 13:23:36.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:36.049: INFO: Pod pod-with-prestop-exec-hook still exists Feb 12 13:23:38.038: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Feb 12 13:23:38.046: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:23:38.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7831" for this suite. Feb 12 13:24:00.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:24:00.295: INFO: namespace container-lifecycle-hook-7831 deletion completed in 22.215434591s • [SLOW TEST:68.569 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:24:00.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Feb 12 13:24:00.425: INFO: Waiting up to 5m0s for pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67" in namespace "containers-652" to be "success or failure" Feb 12 13:24:00.448: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 22.425099ms Feb 12 13:24:02.456: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030859261s Feb 12 13:24:04.467: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0413931s Feb 12 13:24:06.477: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051525813s Feb 12 13:24:08.492: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066184723s Feb 12 13:24:10.505: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.079625772s STEP: Saw pod success Feb 12 13:24:10.505: INFO: Pod "client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67" satisfied condition "success or failure" Feb 12 13:24:10.516: INFO: Trying to get logs from node iruya-node pod client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67 container test-container: STEP: delete the pod Feb 12 13:24:10.586: INFO: Waiting for pod client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67 to disappear Feb 12 13:24:10.618: INFO: Pod client-containers-8cc9f839-d1b7-44a0-ba1b-e8407a80bc67 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:24:10.618: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-652" for this suite. Feb 12 13:24:16.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:24:16.757: INFO: namespace containers-652 deletion completed in 6.13085404s • [SLOW TEST:16.461 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:24:16.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3552 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-3552 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3552 Feb 12 13:24:16.897: INFO: Found 0 stateful pods, waiting for 1 Feb 12 13:24:26.912: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod Feb 12 13:24:26.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 13:24:29.899: INFO: stderr: "I0212 13:24:29.382714 1336 log.go:172] (0xc000b86420) (0xc0004268c0) Create stream\nI0212 13:24:29.382951 1336 log.go:172] (0xc000b86420) (0xc0004268c0) 
Stream added, broadcasting: 1\nI0212 13:24:29.391835 1336 log.go:172] (0xc000b86420) Reply frame received for 1\nI0212 13:24:29.391931 1336 log.go:172] (0xc000b86420) (0xc0007140a0) Create stream\nI0212 13:24:29.391949 1336 log.go:172] (0xc000b86420) (0xc0007140a0) Stream added, broadcasting: 3\nI0212 13:24:29.394183 1336 log.go:172] (0xc000b86420) Reply frame received for 3\nI0212 13:24:29.394299 1336 log.go:172] (0xc000b86420) (0xc000714140) Create stream\nI0212 13:24:29.394317 1336 log.go:172] (0xc000b86420) (0xc000714140) Stream added, broadcasting: 5\nI0212 13:24:29.398010 1336 log.go:172] (0xc000b86420) Reply frame received for 5\nI0212 13:24:29.586842 1336 log.go:172] (0xc000b86420) Data frame received for 5\nI0212 13:24:29.586923 1336 log.go:172] (0xc000714140) (5) Data frame handling\nI0212 13:24:29.586958 1336 log.go:172] (0xc000714140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 13:24:29.697916 1336 log.go:172] (0xc000b86420) Data frame received for 3\nI0212 13:24:29.698421 1336 log.go:172] (0xc0007140a0) (3) Data frame handling\nI0212 13:24:29.698541 1336 log.go:172] (0xc0007140a0) (3) Data frame sent\nI0212 13:24:29.874832 1336 log.go:172] (0xc000b86420) (0xc0007140a0) Stream removed, broadcasting: 3\nI0212 13:24:29.875303 1336 log.go:172] (0xc000b86420) Data frame received for 1\nI0212 13:24:29.875852 1336 log.go:172] (0xc000b86420) (0xc000714140) Stream removed, broadcasting: 5\nI0212 13:24:29.876543 1336 log.go:172] (0xc0004268c0) (1) Data frame handling\nI0212 13:24:29.876917 1336 log.go:172] (0xc0004268c0) (1) Data frame sent\nI0212 13:24:29.876972 1336 log.go:172] (0xc000b86420) (0xc0004268c0) Stream removed, broadcasting: 1\nI0212 13:24:29.877074 1336 log.go:172] (0xc000b86420) Go away received\nI0212 13:24:29.878734 1336 log.go:172] (0xc000b86420) (0xc0004268c0) Stream removed, broadcasting: 1\nI0212 13:24:29.878788 1336 log.go:172] (0xc000b86420) (0xc0007140a0) Stream removed, broadcasting: 3\nI0212 13:24:29.878802 1336 log.go:172] (0xc000b86420) (0xc000714140) Stream removed, broadcasting: 5\n" Feb 12 13:24:29.899: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 13:24:29.899: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 13:24:29.912: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 12 13:24:39.922: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 12 13:24:39.922: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 13:24:39.956: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999467s Feb 12 13:24:40.967: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.983518652s Feb 12 13:24:41.975: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.971996939s Feb 12 13:24:42.984: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.963680321s Feb 12 13:24:43.993: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.955369968s Feb 12 13:24:45.002: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.946329435s Feb 12 13:24:46.032: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.937413167s Feb 12 13:24:47.074: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.907020525s Feb 12 13:24:48.079: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.86539977s Feb 12 13:24:49.145: INFO: Verifying 
statefulset ss doesn't scale past 1 for another 860.012853ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3552 Feb 12 13:24:50.153: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 13:24:51.081: INFO: stderr: "I0212 13:24:50.449022 1369 log.go:172] (0xc000a3a2c0) (0xc0008e85a0) Create stream\nI0212 13:24:50.449555 1369 log.go:172] (0xc000a3a2c0) (0xc0008e85a0) Stream added, broadcasting: 1\nI0212 13:24:50.457031 1369 log.go:172] (0xc000a3a2c0) Reply frame received for 1\nI0212 13:24:50.457089 1369 log.go:172] (0xc000a3a2c0) (0xc000970000) Create stream\nI0212 13:24:50.457099 1369 log.go:172] (0xc000a3a2c0) (0xc000970000) Stream added, broadcasting: 3\nI0212 13:24:50.459887 1369 log.go:172] (0xc000a3a2c0) Reply frame received for 3\nI0212 13:24:50.460091 1369 log.go:172] (0xc000a3a2c0) (0xc0009700a0) Create stream\nI0212 13:24:50.460102 1369 log.go:172] (0xc000a3a2c0) (0xc0009700a0) Stream added, broadcasting: 5\nI0212 13:24:50.464075 1369 log.go:172] (0xc000a3a2c0) Reply frame received for 5\nI0212 13:24:50.870358 1369 log.go:172] (0xc000a3a2c0) Data frame received for 3\nI0212 13:24:50.871115 1369 log.go:172] (0xc000a3a2c0) Data frame received for 5\nI0212 13:24:50.871283 1369 log.go:172] (0xc0009700a0) (5) Data frame handling\nI0212 13:24:50.871738 1369 log.go:172] (0xc0009700a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 13:24:50.872243 1369 log.go:172] (0xc000970000) (3) Data frame handling\nI0212 13:24:50.873267 1369 log.go:172] (0xc000970000) (3) Data frame sent\nI0212 13:24:51.064346 1369 log.go:172] (0xc000a3a2c0) Data frame received for 1\nI0212 13:24:51.064497 1369 log.go:172] (0xc000a3a2c0) (0xc0009700a0) Stream removed, broadcasting: 5\nI0212 13:24:51.064575 1369 log.go:172] (0xc0008e85a0) (1) Data frame handling\nI0212 13:24:51.064602 1369 log.go:172] (0xc0008e85a0) (1) Data frame sent\nI0212 13:24:51.064628 1369 log.go:172] (0xc000a3a2c0) (0xc000970000) Stream removed, broadcasting: 3\nI0212 13:24:51.064670 1369 log.go:172] (0xc000a3a2c0) (0xc0008e85a0) Stream removed, broadcasting: 1\nI0212 13:24:51.064691 1369 log.go:172] (0xc000a3a2c0) Go away received\nI0212 13:24:51.065764 1369 log.go:172] (0xc000a3a2c0) (0xc0008e85a0) Stream removed, broadcasting: 1\nI0212 13:24:51.065785 1369 log.go:172] (0xc000a3a2c0) (0xc000970000) Stream removed, broadcasting: 3\nI0212 13:24:51.065795 1369 log.go:172] (0xc000a3a2c0) (0xc0009700a0) Stream removed, broadcasting: 5\n" Feb 12 13:24:51.082: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 13:24:51.082: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 13:24:51.128: INFO: Found 1 stateful pods, waiting for 3 Feb 12 13:25:01.140: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 12 13:25:01.140: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 12 13:25:01.140: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false Feb 12 13:25:11.140: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 12 13:25:11.140: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 12 13:25:11.140: INFO: 
Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod Feb 12 13:25:11.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 13:25:11.685: INFO: stderr: "I0212 13:25:11.392015 1390 log.go:172] (0xc00012a8f0) (0xc0005c0b40) Create stream\nI0212 13:25:11.392285 1390 log.go:172] (0xc00012a8f0) (0xc0005c0b40) Stream added, broadcasting: 1\nI0212 13:25:11.397785 1390 log.go:172] (0xc00012a8f0) Reply frame received for 1\nI0212 13:25:11.397818 1390 log.go:172] (0xc00012a8f0) (0xc0005c0be0) Create stream\nI0212 13:25:11.397825 1390 log.go:172] (0xc00012a8f0) (0xc0005c0be0) Stream added, broadcasting: 3\nI0212 13:25:11.399631 1390 log.go:172] (0xc00012a8f0) Reply frame received for 3\nI0212 13:25:11.401798 1390 log.go:172] (0xc00012a8f0) (0xc000768000) Create stream\nI0212 13:25:11.402083 1390 log.go:172] (0xc00012a8f0) (0xc000768000) Stream added, broadcasting: 5\nI0212 13:25:11.408082 1390 log.go:172] (0xc00012a8f0) Reply frame received for 5\nI0212 13:25:11.545670 1390 log.go:172] (0xc00012a8f0) Data frame received for 5\nI0212 13:25:11.545743 1390 log.go:172] (0xc000768000) (5) Data frame handling\nI0212 13:25:11.545769 1390 log.go:172] (0xc000768000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 13:25:11.556162 1390 log.go:172] (0xc00012a8f0) Data frame received for 3\nI0212 13:25:11.556177 1390 log.go:172] (0xc0005c0be0) (3) Data frame handling\nI0212 13:25:11.556192 1390 log.go:172] (0xc0005c0be0) (3) Data frame sent\nI0212 13:25:11.671736 1390 log.go:172] (0xc00012a8f0) Data frame received for 1\nI0212 13:25:11.671784 1390 log.go:172] (0xc0005c0b40) (1) Data frame handling\nI0212 13:25:11.671814 1390 log.go:172] (0xc0005c0b40) (1) Data frame sent\nI0212 13:25:11.672071 1390 log.go:172] (0xc00012a8f0) (0xc0005c0b40) Stream removed, broadcasting: 1\nI0212 13:25:11.673116 1390 log.go:172] (0xc00012a8f0) (0xc0005c0be0) Stream removed, broadcasting: 3\nI0212 13:25:11.673197 1390 log.go:172] (0xc00012a8f0) (0xc000768000) Stream removed, broadcasting: 5\nI0212 13:25:11.673239 1390 log.go:172] (0xc00012a8f0) (0xc0005c0b40) Stream removed, broadcasting: 1\nI0212 13:25:11.673247 1390 log.go:172] (0xc00012a8f0) (0xc0005c0be0) Stream removed, broadcasting: 3\nI0212 13:25:11.673252 1390 log.go:172] (0xc00012a8f0) (0xc000768000) Stream removed, broadcasting: 5\n" Feb 12 13:25:11.685: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 13:25:11.685: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 13:25:11.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 13:25:12.253: INFO: stderr: "I0212 13:25:11.878301 1414 log.go:172] (0xc000830370) (0xc0008686e0) Create stream\nI0212 13:25:11.878829 1414 log.go:172] (0xc000830370) (0xc0008686e0) Stream added, broadcasting: 1\nI0212 13:25:11.885733 1414 log.go:172] (0xc000830370) Reply frame received for 1\nI0212 13:25:11.885778 1414 log.go:172] (0xc000830370) (0xc00065a1e0) Create stream\nI0212 13:25:11.885792 1414 log.go:172] (0xc000830370) (0xc00065a1e0) Stream added, broadcasting: 3\nI0212 
13:25:11.886843 1414 log.go:172] (0xc000830370) Reply frame received for 3\nI0212 13:25:11.886875 1414 log.go:172] (0xc000830370) (0xc00065a280) Create stream\nI0212 13:25:11.886887 1414 log.go:172] (0xc000830370) (0xc00065a280) Stream added, broadcasting: 5\nI0212 13:25:11.887658 1414 log.go:172] (0xc000830370) Reply frame received for 5\nI0212 13:25:12.012563 1414 log.go:172] (0xc000830370) Data frame received for 5\nI0212 13:25:12.012649 1414 log.go:172] (0xc00065a280) (5) Data frame handling\nI0212 13:25:12.012666 1414 log.go:172] (0xc00065a280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 13:25:12.166468 1414 log.go:172] (0xc000830370) Data frame received for 3\nI0212 13:25:12.166623 1414 log.go:172] (0xc00065a1e0) (3) Data frame handling\nI0212 13:25:12.166665 1414 log.go:172] (0xc00065a1e0) (3) Data frame sent\nI0212 13:25:12.243804 1414 log.go:172] (0xc000830370) (0xc00065a280) Stream removed, broadcasting: 5\nI0212 13:25:12.243845 1414 log.go:172] (0xc000830370) Data frame received for 1\nI0212 13:25:12.243876 1414 log.go:172] (0xc0008686e0) (1) Data frame handling\nI0212 13:25:12.243889 1414 log.go:172] (0xc000830370) (0xc00065a1e0) Stream removed, broadcasting: 3\nI0212 13:25:12.243970 1414 log.go:172] (0xc0008686e0) (1) Data frame sent\nI0212 13:25:12.243994 1414 log.go:172] (0xc000830370) (0xc0008686e0) Stream removed, broadcasting: 1\nI0212 13:25:12.244081 1414 log.go:172] (0xc000830370) Go away received\nI0212 13:25:12.244792 1414 log.go:172] (0xc000830370) (0xc0008686e0) Stream removed, broadcasting: 1\nI0212 13:25:12.244804 1414 log.go:172] (0xc000830370) (0xc00065a1e0) Stream removed, broadcasting: 3\nI0212 13:25:12.244812 1414 log.go:172] (0xc000830370) (0xc00065a280) Stream removed, broadcasting: 5\n" Feb 12 13:25:12.253: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 13:25:12.253: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 13:25:12.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 13:25:12.863: INFO: stderr: "I0212 13:25:12.448544 1434 log.go:172] (0xc0009b6420) (0xc00044c6e0) Create stream\nI0212 13:25:12.448933 1434 log.go:172] (0xc0009b6420) (0xc00044c6e0) Stream added, broadcasting: 1\nI0212 13:25:12.457428 1434 log.go:172] (0xc0009b6420) Reply frame received for 1\nI0212 13:25:12.461260 1434 log.go:172] (0xc0009b6420) (0xc00044c780) Create stream\nI0212 13:25:12.461688 1434 log.go:172] (0xc0009b6420) (0xc00044c780) Stream added, broadcasting: 3\nI0212 13:25:12.471140 1434 log.go:172] (0xc0009b6420) Reply frame received for 3\nI0212 13:25:12.471227 1434 log.go:172] (0xc0009b6420) (0xc00044c000) Create stream\nI0212 13:25:12.471241 1434 log.go:172] (0xc0009b6420) (0xc00044c000) Stream added, broadcasting: 5\nI0212 13:25:12.473479 1434 log.go:172] (0xc0009b6420) Reply frame received for 5\nI0212 13:25:12.672297 1434 log.go:172] (0xc0009b6420) Data frame received for 5\nI0212 13:25:12.672437 1434 log.go:172] (0xc00044c000) (5) Data frame handling\nI0212 13:25:12.672530 1434 log.go:172] (0xc00044c000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 13:25:12.720936 1434 log.go:172] (0xc0009b6420) Data frame received for 3\nI0212 13:25:12.721029 1434 log.go:172] (0xc00044c780) (3) Data frame handling\nI0212 13:25:12.721056 1434 log.go:172] 
(0xc00044c780) (3) Data frame sent\nI0212 13:25:12.845759 1434 log.go:172] (0xc0009b6420) Data frame received for 1\nI0212 13:25:12.845878 1434 log.go:172] (0xc00044c6e0) (1) Data frame handling\nI0212 13:25:12.845901 1434 log.go:172] (0xc00044c6e0) (1) Data frame sent\nI0212 13:25:12.845941 1434 log.go:172] (0xc0009b6420) (0xc00044c6e0) Stream removed, broadcasting: 1\nI0212 13:25:12.846388 1434 log.go:172] (0xc0009b6420) (0xc00044c780) Stream removed, broadcasting: 3\nI0212 13:25:12.846699 1434 log.go:172] (0xc0009b6420) (0xc00044c000) Stream removed, broadcasting: 5\nI0212 13:25:12.846768 1434 log.go:172] (0xc0009b6420) Go away received\nI0212 13:25:12.848192 1434 log.go:172] (0xc0009b6420) (0xc00044c6e0) Stream removed, broadcasting: 1\nI0212 13:25:12.848231 1434 log.go:172] (0xc0009b6420) (0xc00044c780) Stream removed, broadcasting: 3\nI0212 13:25:12.848245 1434 log.go:172] (0xc0009b6420) (0xc00044c000) Stream removed, broadcasting: 5\n" Feb 12 13:25:12.863: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 13:25:12.863: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 13:25:12.863: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 13:25:12.871: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Feb 12 13:25:22.898: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 12 13:25:22.898: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 12 13:25:22.898: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 12 13:25:22.925: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999316s Feb 12 13:25:23.942: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.987230423s Feb 12 13:25:24.960: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.970458014s Feb 12 13:25:26.072: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.952091315s Feb 12 13:25:27.084: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.840322114s Feb 12 13:25:28.092: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.827883457s Feb 12 13:25:29.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.82055514s Feb 12 13:25:30.116: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.808402549s Feb 12 13:25:31.131: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.796339301s Feb 12 13:25:32.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 780.892907ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-3552 Feb 12 13:25:33.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 13:25:33.817: INFO: stderr: "I0212 13:25:33.423943 1454 log.go:172] (0xc0009e4420) (0xc0003006e0) Create stream\nI0212 13:25:33.424201 1454 log.go:172] (0xc0009e4420) (0xc0003006e0) Stream added, broadcasting: 1\nI0212 13:25:33.431849 1454 log.go:172] (0xc0009e4420) Reply frame received for 1\nI0212 13:25:33.431911 1454 log.go:172] (0xc0009e4420) (0xc000814000) Create stream\nI0212 13:25:33.431920 1454 log.go:172] (0xc0009e4420) (0xc000814000) Stream added, broadcasting: 3\nI0212 13:25:33.435531 1454 log.go:172] 
(0xc0009e4420) Reply frame received for 3\nI0212 13:25:33.435667 1454 log.go:172] (0xc0009e4420) (0xc000300780) Create stream\nI0212 13:25:33.435697 1454 log.go:172] (0xc0009e4420) (0xc000300780) Stream added, broadcasting: 5\nI0212 13:25:33.439232 1454 log.go:172] (0xc0009e4420) Reply frame received for 5\nI0212 13:25:33.595018 1454 log.go:172] (0xc0009e4420) Data frame received for 5\nI0212 13:25:33.595218 1454 log.go:172] (0xc000300780) (5) Data frame handling\nI0212 13:25:33.595293 1454 log.go:172] (0xc000300780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 13:25:33.597068 1454 log.go:172] (0xc0009e4420) Data frame received for 3\nI0212 13:25:33.597092 1454 log.go:172] (0xc000814000) (3) Data frame handling\nI0212 13:25:33.597129 1454 log.go:172] (0xc000814000) (3) Data frame sent\nI0212 13:25:33.804067 1454 log.go:172] (0xc0009e4420) Data frame received for 1\nI0212 13:25:33.804434 1454 log.go:172] (0xc0009e4420) (0xc000814000) Stream removed, broadcasting: 3\nI0212 13:25:33.804617 1454 log.go:172] (0xc0003006e0) (1) Data frame handling\nI0212 13:25:33.804690 1454 log.go:172] (0xc0003006e0) (1) Data frame sent\nI0212 13:25:33.805042 1454 log.go:172] (0xc0009e4420) (0xc000300780) Stream removed, broadcasting: 5\nI0212 13:25:33.805455 1454 log.go:172] (0xc0009e4420) (0xc0003006e0) Stream removed, broadcasting: 1\nI0212 13:25:33.805524 1454 log.go:172] (0xc0009e4420) Go away received\nI0212 13:25:33.807418 1454 log.go:172] (0xc0009e4420) (0xc0003006e0) Stream removed, broadcasting: 1\nI0212 13:25:33.807449 1454 log.go:172] (0xc0009e4420) (0xc000814000) Stream removed, broadcasting: 3\nI0212 13:25:33.807489 1454 log.go:172] (0xc0009e4420) (0xc000300780) Stream removed, broadcasting: 5\n" Feb 12 13:25:33.817: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 13:25:33.818: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 13:25:33.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 13:25:34.385: INFO: stderr: "I0212 13:25:34.121071 1474 log.go:172] (0xc000896f20) (0xc00091ed20) Create stream\nI0212 13:25:34.121773 1474 log.go:172] (0xc000896f20) (0xc00091ed20) Stream added, broadcasting: 1\nI0212 13:25:34.142040 1474 log.go:172] (0xc000896f20) Reply frame received for 1\nI0212 13:25:34.142482 1474 log.go:172] (0xc000896f20) (0xc00091e000) Create stream\nI0212 13:25:34.142527 1474 log.go:172] (0xc000896f20) (0xc00091e000) Stream added, broadcasting: 3\nI0212 13:25:34.144080 1474 log.go:172] (0xc000896f20) Reply frame received for 3\nI0212 13:25:34.144182 1474 log.go:172] (0xc000896f20) (0xc000864000) Create stream\nI0212 13:25:34.144222 1474 log.go:172] (0xc000896f20) (0xc000864000) Stream added, broadcasting: 5\nI0212 13:25:34.146763 1474 log.go:172] (0xc000896f20) Reply frame received for 5\nI0212 13:25:34.267838 1474 log.go:172] (0xc000896f20) Data frame received for 3\nI0212 13:25:34.268279 1474 log.go:172] (0xc00091e000) (3) Data frame handling\nI0212 13:25:34.268388 1474 log.go:172] (0xc00091e000) (3) Data frame sent\nI0212 13:25:34.268515 1474 log.go:172] (0xc000896f20) Data frame received for 5\nI0212 13:25:34.268575 1474 log.go:172] (0xc000864000) (5) Data frame handling\nI0212 13:25:34.268629 1474 log.go:172] (0xc000864000) (5) Data frame sent\n+ mv -v /tmp/index.html 
/usr/share/nginx/html/\nI0212 13:25:34.377110 1474 log.go:172] (0xc000896f20) Data frame received for 1\nI0212 13:25:34.377331 1474 log.go:172] (0xc000896f20) (0xc000864000) Stream removed, broadcasting: 5\nI0212 13:25:34.377420 1474 log.go:172] (0xc00091ed20) (1) Data frame handling\nI0212 13:25:34.377469 1474 log.go:172] (0xc00091ed20) (1) Data frame sent\nI0212 13:25:34.377554 1474 log.go:172] (0xc000896f20) (0xc00091e000) Stream removed, broadcasting: 3\nI0212 13:25:34.377579 1474 log.go:172] (0xc000896f20) (0xc00091ed20) Stream removed, broadcasting: 1\nI0212 13:25:34.377597 1474 log.go:172] (0xc000896f20) Go away received\nI0212 13:25:34.379040 1474 log.go:172] (0xc000896f20) (0xc00091ed20) Stream removed, broadcasting: 1\nI0212 13:25:34.379058 1474 log.go:172] (0xc000896f20) (0xc00091e000) Stream removed, broadcasting: 3\nI0212 13:25:34.379065 1474 log.go:172] (0xc000896f20) (0xc000864000) Stream removed, broadcasting: 5\n" Feb 12 13:25:34.385: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 13:25:34.386: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 13:25:34.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3552 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 13:25:34.871: INFO: stderr: "I0212 13:25:34.568628 1490 log.go:172] (0xc000104dc0) (0xc0005e4780) Create stream\nI0212 13:25:34.569032 1490 log.go:172] (0xc000104dc0) (0xc0005e4780) Stream added, broadcasting: 1\nI0212 13:25:34.577555 1490 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0212 13:25:34.577714 1490 log.go:172] (0xc000104dc0) (0xc0005e4820) Create stream\nI0212 13:25:34.577728 1490 log.go:172] (0xc000104dc0) (0xc0005e4820) Stream added, broadcasting: 3\nI0212 13:25:34.580259 1490 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0212 13:25:34.580303 1490 log.go:172] (0xc000104dc0) (0xc0008a4000) Create stream\nI0212 13:25:34.580312 1490 log.go:172] (0xc000104dc0) (0xc0008a4000) Stream added, broadcasting: 5\nI0212 13:25:34.582633 1490 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0212 13:25:34.764418 1490 log.go:172] (0xc000104dc0) Data frame received for 3\nI0212 13:25:34.764598 1490 log.go:172] (0xc0005e4820) (3) Data frame handling\nI0212 13:25:34.764625 1490 log.go:172] (0xc0005e4820) (3) Data frame sent\nI0212 13:25:34.764673 1490 log.go:172] (0xc000104dc0) Data frame received for 5\nI0212 13:25:34.764702 1490 log.go:172] (0xc0008a4000) (5) Data frame handling\nI0212 13:25:34.764732 1490 log.go:172] (0xc0008a4000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 13:25:34.861208 1490 log.go:172] (0xc000104dc0) Data frame received for 1\nI0212 13:25:34.861353 1490 log.go:172] (0xc000104dc0) (0xc0008a4000) Stream removed, broadcasting: 5\nI0212 13:25:34.861422 1490 log.go:172] (0xc0005e4780) (1) Data frame handling\nI0212 13:25:34.861438 1490 log.go:172] (0xc0005e4780) (1) Data frame sent\nI0212 13:25:34.861536 1490 log.go:172] (0xc000104dc0) (0xc0005e4820) Stream removed, broadcasting: 3\nI0212 13:25:34.861584 1490 log.go:172] (0xc000104dc0) (0xc0005e4780) Stream removed, broadcasting: 1\nI0212 13:25:34.861603 1490 log.go:172] (0xc000104dc0) Go away received\nI0212 13:25:34.863201 1490 log.go:172] (0xc000104dc0) (0xc0005e4780) Stream removed, broadcasting: 1\nI0212 13:25:34.863360 1490 log.go:172] (0xc000104dc0) (0xc0005e4820) Stream removed, 
broadcasting: 3\nI0212 13:25:34.863375 1490 log.go:172] (0xc000104dc0) (0xc0008a4000) Stream removed, broadcasting: 5\n" Feb 12 13:25:34.871: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 13:25:34.871: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 13:25:34.871: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Feb 12 13:26:04.925: INFO: Deleting all statefulset in ns statefulset-3552 Feb 12 13:26:04.931: INFO: Scaling statefulset ss to 0 Feb 12 13:26:04.942: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 13:26:04.945: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:26:04.970: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3552" for this suite. Feb 12 13:26:11.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:26:11.295: INFO: namespace statefulset-3552 deletion completed in 6.315038978s • [SLOW TEST:114.538 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:26:11.295: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 12 13:26:11.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-227' Feb 12 13:26:11.538: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 12 13:26:11.539: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Feb 12 13:26:11.550: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Feb 12 13:26:11.600: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Feb 12 13:26:11.664: INFO: scanned /root for discovery docs: Feb 12 13:26:11.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-227' Feb 12 13:26:35.253: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Feb 12 13:26:35.253: INFO: stdout: "Created e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f\nScaling up e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" Feb 12 13:26:35.253: INFO: stdout: "Created e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f\nScaling up e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Feb 12 13:26:35.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-227' Feb 12 13:26:35.383: INFO: stderr: "" Feb 12 13:26:35.383: INFO: stdout: "e2e-test-nginx-rc-6nds5 e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 Feb 12 13:26:40.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-227' Feb 12 13:26:40.618: INFO: stderr: "" Feb 12 13:26:40.618: INFO: stdout: "e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd " Feb 12 13:26:40.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-227' Feb 12 13:26:40.765: INFO: stderr: "" Feb 12 13:26:40.765: INFO: stdout: "true" Feb 12 13:26:40.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-227' Feb 12 13:26:40.888: INFO: stderr: "" Feb 12 13:26:40.888: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Feb 12 13:26:40.888: INFO: e2e-test-nginx-rc-e3b4497141e0dcd041472f342ea80a5f-c28vd is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Feb 12 13:26:40.888: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-227' Feb 12 13:26:41.041: INFO: stderr: "" Feb 12 13:26:41.042: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:26:41.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-227" for this suite. Feb 12 13:27:03.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:27:03.167: INFO: namespace kubectl-227 deletion completed in 22.111630507s • [SLOW TEST:51.871 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:27:03.167: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 12 13:27:13.877: INFO: Successfully updated pod "labelsupdate603740fc-3883-4c69-acc1-6e4d27cb2ae5" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:27:16.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6548" for this suite. 
Feb 12 13:27:38.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:27:38.195: INFO: namespace projected-6548 deletion completed in 22.164199116s • [SLOW TEST:35.028 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:27:38.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-e207788b-2fd8-4536-b8c1-3c9857f202fa STEP: Creating a pod to test consume secrets Feb 12 13:27:39.190: INFO: Waiting up to 5m0s for pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58" in namespace "secrets-616" to be "success or failure" Feb 12 13:27:39.233: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Pending", Reason="", readiness=false. Elapsed: 43.425132ms Feb 12 13:27:41.355: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165320172s Feb 12 13:27:43.370: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.179983762s Feb 12 13:27:45.380: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.190169755s Feb 12 13:27:47.396: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Running", Reason="", readiness=true. Elapsed: 8.205984794s Feb 12 13:27:49.405: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.215454247s STEP: Saw pod success Feb 12 13:27:49.405: INFO: Pod "pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58" satisfied condition "success or failure" Feb 12 13:27:49.418: INFO: Trying to get logs from node iruya-node pod pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58 container secret-volume-test: STEP: delete the pod Feb 12 13:27:49.517: INFO: Waiting for pod pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58 to disappear Feb 12 13:27:49.527: INFO: Pod pod-secrets-d1cf0e4f-8b9e-4703-a80e-61b005ce4f58 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:27:49.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-616" for this suite. 
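(The secrets-with-mappings case above projects a single secret key to a custom path with an explicit file mode and has the pod read the mounted file back. A rough hand-run equivalent, with illustrative names; the secret name, key, and paths below are not from this run:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-mapping-demo      # illustrative name, not from the test
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1    # the "mapping": key data-1 projected under a new file name
        mode: 0400               # the "Item Mode": owner read-only
EOF

Once the pod completes, kubectl logs secret-mapping-demo should print value-1; the 0400 mode restricts the projected file to owner read-only.)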
Feb 12 13:27:55.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:27:55.715: INFO: namespace secrets-616 deletion completed in 6.180965928s • [SLOW TEST:17.520 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:27:55.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Feb 12 13:27:55.863: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073731,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 13:27:55.864: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073731,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Feb 12 13:28:05.885: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073745,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp: 
,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Feb 12 13:28:05.886: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073745,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Feb 12 13:28:15.918: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073758,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 12 13:28:15.919: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073758,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Feb 12 13:28:25.937: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073772,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 12 13:28:25.938: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-a,UID:78d2c396-4677-4671-aa87-9ef40aa4cb81,ResourceVersion:24073772,Generation:0,CreationTimestamp:2020-02-12 13:27:55 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Feb 12 13:28:35.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-b,UID:e20ccb37-1c8c-4cec-b284-827087874934,ResourceVersion:24073786,Generation:0,CreationTimestamp:2020-02-12 13:28:35 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 13:28:35.958: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-b,UID:e20ccb37-1c8c-4cec-b284-827087874934,ResourceVersion:24073786,Generation:0,CreationTimestamp:2020-02-12 13:28:35 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Feb 12 13:28:45.974: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-b,UID:e20ccb37-1c8c-4cec-b284-827087874934,ResourceVersion:24073800,Generation:0,CreationTimestamp:2020-02-12 13:28:35 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Feb 12 13:28:45.974: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-1678,SelfLink:/api/v1/namespaces/watch-1678/configmaps/e2e-watch-test-configmap-b,UID:e20ccb37-1c8c-4cec-b284-827087874934,ResourceVersion:24073800,Generation:0,CreationTimestamp:2020-02-12 13:28:35 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] 
Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:28:55.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1678" for this suite. Feb 12 13:29:02.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:29:02.294: INFO: namespace watch-1678 deletion completed in 6.288575426s • [SLOW TEST:66.578 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:29:02.294: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-674dcefa-c6da-4aad-9aec-9607b0a15dd1 STEP: Creating a pod to test consume configMaps Feb 12 13:29:02.430: INFO: Waiting up to 5m0s for pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd" in namespace "configmap-8960" to be "success or failure" Feb 12 13:29:02.440: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 9.948935ms Feb 12 13:29:04.450: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020096491s Feb 12 13:29:06.498: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067707357s Feb 12 13:29:08.516: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086037996s Feb 12 13:29:10.538: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.107496867s STEP: Saw pod success Feb 12 13:29:10.538: INFO: Pod "pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd" satisfied condition "success or failure" Feb 12 13:29:10.544: INFO: Trying to get logs from node iruya-node pod pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd container configmap-volume-test: STEP: delete the pod Feb 12 13:29:10.622: INFO: Waiting for pod pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd to disappear Feb 12 13:29:10.687: INFO: Pod pod-configmaps-b14b6226-d8ae-4de7-ba73-61fa18c16bfd no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:29:10.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8960" for this suite. Feb 12 13:29:16.717: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:29:16.890: INFO: namespace configmap-8960 deletion completed in 6.195119397s • [SLOW TEST:14.596 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:29:16.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-4b8ca7ed-ecd4-4ade-9219-b4c7b85b7736 STEP: Creating a pod to test consume secrets Feb 12 13:29:17.130: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e" in namespace "projected-2548" to be "success or failure" Feb 12 13:29:17.183: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 53.040593ms Feb 12 13:29:19.192: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062078652s Feb 12 13:29:21.202: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071440807s Feb 12 13:29:23.217: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087048684s Feb 12 13:29:25.228: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.097893155s Feb 12 13:29:27.237: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.106992779s STEP: Saw pod success Feb 12 13:29:27.237: INFO: Pod "pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e" satisfied condition "success or failure" Feb 12 13:29:27.242: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e container projected-secret-volume-test: STEP: delete the pod Feb 12 13:29:27.365: INFO: Waiting for pod pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e to disappear Feb 12 13:29:27.388: INFO: Pod pod-projected-secrets-c07dc89d-d56b-4e3f-81f4-61b724066c2e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:29:27.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2548" for this suite. Feb 12 13:29:33.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:29:33.563: INFO: namespace projected-2548 deletion completed in 6.167299692s • [SLOW TEST:16.673 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:29:33.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Feb 12 13:29:33.709: INFO: Waiting up to 5m0s for pod "pod-72266e99-9127-4eea-abce-6012e85a6a16" in namespace "emptydir-2717" to be "success or failure" Feb 12 13:29:33.719: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Pending", Reason="", readiness=false. Elapsed: 10.088065ms Feb 12 13:29:35.727: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017918868s Feb 12 13:29:37.776: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06692319s Feb 12 13:29:39.794: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085452507s Feb 12 13:29:41.804: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Running", Reason="", readiness=true. Elapsed: 8.095348058s Feb 12 13:29:43.816: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.10665636s STEP: Saw pod success Feb 12 13:29:43.816: INFO: Pod "pod-72266e99-9127-4eea-abce-6012e85a6a16" satisfied condition "success or failure" Feb 12 13:29:43.819: INFO: Trying to get logs from node iruya-node pod pod-72266e99-9127-4eea-abce-6012e85a6a16 container test-container: STEP: delete the pod Feb 12 13:29:43.921: INFO: Waiting for pod pod-72266e99-9127-4eea-abce-6012e85a6a16 to disappear Feb 12 13:29:43.937: INFO: Pod pod-72266e99-9127-4eea-abce-6012e85a6a16 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:29:43.937: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2717" for this suite. Feb 12 13:29:50.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:29:50.097: INFO: namespace emptydir-2717 deletion completed in 6.15438384s • [SLOW TEST:16.534 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:29:50.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-cd225ba9-0d4e-4c45-be32-9f924c7a2f8a STEP: Creating a pod to test consume secrets Feb 12 13:29:50.260: INFO: Waiting up to 5m0s for pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6" in namespace "secrets-3220" to be "success or failure" Feb 12 13:29:50.347: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 87.303074ms Feb 12 13:29:52.354: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094185232s Feb 12 13:29:54.361: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10127209s Feb 12 13:29:56.375: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114794392s Feb 12 13:29:58.385: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Running", Reason="", readiness=true. Elapsed: 8.124634293s Feb 12 13:30:00.393: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.132911431s STEP: Saw pod success Feb 12 13:30:00.393: INFO: Pod "pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6" satisfied condition "success or failure" Feb 12 13:30:00.398: INFO: Trying to get logs from node iruya-node pod pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6 container secret-volume-test: STEP: delete the pod Feb 12 13:30:00.530: INFO: Waiting for pod pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6 to disappear Feb 12 13:30:00.540: INFO: Pod pod-secrets-f0d4aaf3-c216-4967-8a61-50bd07e29cc6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:30:00.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3220" for this suite. Feb 12 13:30:06.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:30:06.717: INFO: namespace secrets-3220 deletion completed in 6.170720267s • [SLOW TEST:16.619 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:30:06.717: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Feb 12 13:30:06.912: INFO: Waiting up to 5m0s for pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c" in namespace "emptydir-4899" to be "success or failure" Feb 12 13:30:06.953: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 40.176518ms Feb 12 13:30:08.978: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065328035s Feb 12 13:30:10.987: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073732815s Feb 12 13:30:13.045: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132477406s Feb 12 13:30:15.093: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.180204557s Feb 12 13:30:17.137: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.224476203s STEP: Saw pod success Feb 12 13:30:17.138: INFO: Pod "pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c" satisfied condition "success or failure" Feb 12 13:30:17.141: INFO: Trying to get logs from node iruya-node pod pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c container test-container: STEP: delete the pod Feb 12 13:30:17.214: INFO: Waiting for pod pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c to disappear Feb 12 13:30:17.232: INFO: Pod pod-239d8429-f6a8-47b6-a0e8-9c916a936a3c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:30:17.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4899" for this suite. Feb 12 13:30:23.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:30:23.583: INFO: namespace emptydir-4899 deletion completed in 6.258801609s • [SLOW TEST:16.866 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:30:23.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0212 13:30:27.287694 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Feb 12 13:30:27.287: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:30:27.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4055" for this suite. Feb 12 13:30:33.655: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:30:33.809: INFO: namespace gc-4055 deletion completed in 6.515526947s • [SLOW TEST:10.225 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:30:33.809: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-edc4bd4d-3cf3-4e92-be8f-bc3281fbd80e STEP: Creating a pod to test consume secrets Feb 12 13:30:33.946: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1" in namespace "projected-4793" to be "success or failure" Feb 12 13:30:33.968: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 21.935132ms Feb 12 13:30:35.983: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036426905s Feb 12 13:30:37.992: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.045439972s Feb 12 13:30:39.999: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052948211s Feb 12 13:30:42.013: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066845054s Feb 12 13:30:44.028: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081081724s STEP: Saw pod success Feb 12 13:30:44.028: INFO: Pod "pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1" satisfied condition "success or failure" Feb 12 13:30:44.035: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1 container projected-secret-volume-test: STEP: delete the pod Feb 12 13:30:44.111: INFO: Waiting for pod pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1 to disappear Feb 12 13:30:44.257: INFO: Pod pod-projected-secrets-1b2619c1-4325-4450-8c1f-699bc4ece1b1 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:30:44.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4793" for this suite. Feb 12 13:30:50.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:30:50.506: INFO: namespace projected-4793 deletion completed in 6.233098611s • [SLOW TEST:16.697 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:30:50.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Feb 12 13:30:50.652: INFO: Waiting up to 5m0s for pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f" in namespace "containers-307" to be "success or failure" Feb 12 13:30:50.664: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.125109ms Feb 12 13:30:52.674: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022090292s Feb 12 13:30:54.689: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.037161861s Feb 12 13:30:56.704: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051907191s Feb 12 13:30:58.713: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.061211522s Feb 12 13:31:00.721: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069137461s STEP: Saw pod success Feb 12 13:31:00.721: INFO: Pod "client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f" satisfied condition "success or failure" Feb 12 13:31:00.726: INFO: Trying to get logs from node iruya-node pod client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f container test-container: STEP: delete the pod Feb 12 13:31:00.788: INFO: Waiting for pod client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f to disappear Feb 12 13:31:00.798: INFO: Pod client-containers-62902ad6-16ed-4f34-b7cc-bbe7e8068f3f no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:31:00.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-307" for this suite. Feb 12 13:31:06.841: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:31:07.005: INFO: namespace containers-307 deletion completed in 6.197107837s • [SLOW TEST:16.498 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:31:07.006: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 12 13:31:07.200: INFO: Pod name rollover-pod: Found 0 pods out of 1 Feb 12 13:31:12.208: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Feb 12 13:31:14.217: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Feb 12 13:31:16.227: INFO: Creating deployment "test-rollover-deployment" Feb 12 13:31:16.251: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Feb 12 13:31:18.265: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Feb 12 13:31:18.275: INFO: Ensure that both replica sets have 1 created replica Feb 12 13:31:18.282: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Feb 12 
13:31:18.291: INFO: Updating deployment test-rollover-deployment Feb 12 13:31:18.291: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Feb 12 13:31:20.317: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Feb 12 13:31:20.348: INFO: Make sure deployment "test-rollover-deployment" is complete Feb 12 13:31:20.354: INFO: all replica sets need to contain the pod-template-hash label Feb 12 13:31:20.354: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111078, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:22.373: INFO: all replica sets need to contain the pod-template-hash label Feb 12 13:31:22.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111078, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:24.662: INFO: all replica sets need to contain the pod-template-hash label Feb 12 13:31:24.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111078, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:26.374: INFO: all replica sets 
need to contain the pod-template-hash label Feb 12 13:31:26.375: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111078, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:28.372: INFO: all replica sets need to contain the pod-template-hash label Feb 12 13:31:28.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:30.373: INFO: all replica sets need to contain the pod-template-hash label Feb 12 13:31:30.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:32.374: INFO: all replica sets need to contain the pod-template-hash label Feb 12 13:31:32.374: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, 
loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:34.372: INFO: all replica sets need to contain the pod-template-hash label Feb 12 13:31:34.373: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:36.368: INFO: all replica sets need to contain the pod-template-hash label Feb 12 13:31:36.369: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111087, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:38.391: INFO: Feb 12 13:31:38.391: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111098, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111076, 
loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:31:40.486: INFO: Feb 12 13:31:40.486: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Feb 12 13:31:40.515: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-8605,SelfLink:/apis/apps/v1/namespaces/deployment-8605/deployments/test-rollover-deployment,UID:b72a898a-fd1e-45d9-ba2a-dcbd43705bf4,ResourceVersion:24074308,Generation:2,CreationTimestamp:2020-02-12 13:31:16 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-02-12 13:31:16 +0000 UTC 2020-02-12 13:31:16 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-02-12 13:31:38 +0000 UTC 2020-02-12 13:31:16 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Feb 12 13:31:40.527: INFO: New ReplicaSet 
"test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-8605,SelfLink:/apis/apps/v1/namespaces/deployment-8605/replicasets/test-rollover-deployment-854595fc44,UID:a45f9884-ff03-4ba8-b304-5b338f4c4508,ResourceVersion:24074297,Generation:2,CreationTimestamp:2020-02-12 13:31:18 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b72a898a-fd1e-45d9-ba2a-dcbd43705bf4 0xc0027b28c7 0xc0027b28c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Feb 12 13:31:40.527: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Feb 12 13:31:40.527: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-8605,SelfLink:/apis/apps/v1/namespaces/deployment-8605/replicasets/test-rollover-controller,UID:8a376524-1c7e-48ec-ab9b-96c8792c3420,ResourceVersion:24074307,Generation:2,CreationTimestamp:2020-02-12 13:31:07 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 
1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b72a898a-fd1e-45d9-ba2a-dcbd43705bf4 0xc0027b27f7 0xc0027b27f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 12 13:31:40.527: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-8605,SelfLink:/apis/apps/v1/namespaces/deployment-8605/replicasets/test-rollover-deployment-9b8b997cf,UID:1620bc1d-b07b-42ae-b784-c69f9994cf68,ResourceVersion:24074263,Generation:2,CreationTimestamp:2020-02-12 13:31:16 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment b72a898a-fd1e-45d9-ba2a-dcbd43705bf4 0xc0027b2990 0xc0027b2991}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave 
gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Feb 12 13:31:40.533: INFO: Pod "test-rollover-deployment-854595fc44-grhns" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-grhns,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-8605,SelfLink:/api/v1/namespaces/deployment-8605/pods/test-rollover-deployment-854595fc44-grhns,UID:f7373f0e-e2f7-4e9a-85cf-24ae6892ce5e,ResourceVersion:24074282,Generation:0,CreationTimestamp:2020-02-12 13:31:18 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 a45f9884-ff03-4ba8-b304-5b338f4c4508 0xc0027b3587 0xc0027b3588}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-qmnc8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-qmnc8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-qmnc8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0027b35f0} {node.kubernetes.io/unreachable Exists NoExecute 
0xc0027b3610}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:31:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:31:27 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:31:27 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 13:31:18 +0000 UTC }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-02-12 13:31:18 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-02-12 13:31:26 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e11dd11f43296079b16f9ffd073ff0ed25f445f14bd1618f17622d635d401f3c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:31:40.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8605" for this suite. Feb 12 13:31:49.385: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:31:49.879: INFO: namespace deployment-8605 deletion completed in 9.341420258s • [SLOW TEST:42.874 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:31:49.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 12 13:31:50.064: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7030,SelfLink:/api/v1/namespaces/watch-7030/configmaps/e2e-watch-test-resource-version,UID:ff60632f-d441-4b66-9e04-521ae04d5c32,ResourceVersion:24074366,Generation:0,CreationTimestamp:2020-02-12 13:31:50 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Feb 12 13:31:50.064: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-7030,SelfLink:/api/v1/namespaces/watch-7030/configmaps/e2e-watch-test-resource-version,UID:ff60632f-d441-4b66-9e04-521ae04d5c32,ResourceVersion:24074367,Generation:0,CreationTimestamp:2020-02-12 13:31:50 +0000 UTC,DeletionTimestamp: ,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:31:50.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7030" for this suite. Feb 12 13:31:56.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:31:56.312: INFO: namespace watch-7030 deletion completed in 6.170711911s • [SLOW TEST:6.432 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:31:56.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 13:31:56.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29" in namespace "downward-api-3882" to be "success or failure" Feb 12 13:31:56.438: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 20.887475ms Feb 12 13:31:58.450: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033187001s Feb 12 13:32:00.460: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043426878s Feb 12 13:32:02.474: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.056927938s Feb 12 13:32:04.484: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066893495s Feb 12 13:32:06.503: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086312656s STEP: Saw pod success Feb 12 13:32:06.504: INFO: Pod "downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29" satisfied condition "success or failure" Feb 12 13:32:06.516: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29 container client-container: STEP: delete the pod Feb 12 13:32:06.566: INFO: Waiting for pod downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29 to disappear Feb 12 13:32:06.629: INFO: Pod downwardapi-volume-2337e22b-7455-420e-9231-0df47c1cfb29 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:32:06.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3882" for this suite. Feb 12 13:32:12.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:32:12.847: INFO: namespace downward-api-3882 deletion completed in 6.20920721s • [SLOW TEST:16.535 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:32:12.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-5ebb989a-405f-4598-9b46-8bb486c06e71 STEP: Creating a pod to test consume configMaps Feb 12 13:32:13.006: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e" in namespace "projected-56" to be "success or failure" Feb 12 13:32:13.016: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.289221ms Feb 12 13:32:15.021: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014718622s Feb 12 13:32:17.040: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033106314s Feb 12 13:32:19.047: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.040546734s Feb 12 13:32:21.149: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142670805s Feb 12 13:32:23.157: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.150494907s STEP: Saw pod success Feb 12 13:32:23.157: INFO: Pod "pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e" satisfied condition "success or failure" Feb 12 13:32:23.165: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e container projected-configmap-volume-test: STEP: delete the pod Feb 12 13:32:23.293: INFO: Waiting for pod pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e to disappear Feb 12 13:32:23.296: INFO: Pod pod-projected-configmaps-b8a57cf9-d490-4231-ba80-9f97b895a50e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:32:23.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-56" for this suite. Feb 12 13:32:29.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:32:29.523: INFO: namespace projected-56 deletion completed in 6.22254688s • [SLOW TEST:16.675 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:32:29.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Feb 12 13:32:29.627: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Feb 12 13:32:29.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651' Feb 12 13:32:30.372: INFO: stderr: "" Feb 12 13:32:30.373: INFO: stdout: "service/redis-slave created\n" Feb 12 13:32:30.373: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Feb 12 13:32:30.373: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-1651' Feb 12 13:32:30.911: INFO: stderr: "" Feb 12 13:32:30.912: INFO: stdout: "service/redis-master created\n" Feb 12 13:32:30.913: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Feb 12 13:32:30.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651' Feb 12 13:32:31.548: INFO: stderr: "" Feb 12 13:32:31.548: INFO: stdout: "service/frontend created\n" Feb 12 13:32:31.548: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Feb 12 13:32:31.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651' Feb 12 13:32:32.053: INFO: stderr: "" Feb 12 13:32:32.053: INFO: stdout: "deployment.apps/frontend created\n" Feb 12 13:32:32.053: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Feb 12 13:32:32.054: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651' Feb 12 13:32:32.720: INFO: stderr: "" Feb 12 13:32:32.720: INFO: stdout: "deployment.apps/redis-master created\n" Feb 12 13:32:32.721: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Feb 12 13:32:32.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1651' Feb 12 13:32:33.917: INFO: stderr: "" Feb 12 13:32:33.917: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Feb 12 13:32:33.917: INFO: Waiting for all frontend pods to be Running. Feb 12 13:32:58.969: INFO: Waiting for frontend to serve content. Feb 12 13:32:59.036: INFO: Trying to add a new entry to the guestbook. Feb 12 13:32:59.088: INFO: Verifying that added entry can be retrieved. 
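(The Service and Deployment manifests echoed by the test above appear with their newlines collapsed. For readability, the redis-slave Service from that output is re-indented below as standard YAML; the field values are taken verbatim from the log and only the indentation is reconstructed. The redis-master Service and the frontend/redis Deployments logged alongside it follow the same layout as the upstream guestbook example.)

apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend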
STEP: using delete to clean up resources Feb 12 13:32:59.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651' Feb 12 13:32:59.321: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:32:59.321: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Feb 12 13:32:59.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651' Feb 12 13:32:59.475: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:32:59.476: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 12 13:32:59.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651' Feb 12 13:32:59.657: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:32:59.657: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 12 13:32:59.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651' Feb 12 13:32:59.767: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:32:59.767: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Feb 12 13:32:59.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651' Feb 12 13:32:59.910: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:32:59.910: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Feb 12 13:32:59.911: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1651' Feb 12 13:33:00.211: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Feb 12 13:33:00.211: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:33:00.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1651" for this suite. 
Feb 12 13:33:40.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:33:40.924: INFO: namespace kubectl-1651 deletion completed in 40.690433986s • [SLOW TEST:71.400 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:33:40.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Feb 12 13:33:41.558: INFO: created pod pod-service-account-defaultsa Feb 12 13:33:41.559: INFO: pod pod-service-account-defaultsa service account token volume mount: true Feb 12 13:33:41.573: INFO: created pod pod-service-account-mountsa Feb 12 13:33:41.574: INFO: pod pod-service-account-mountsa service account token volume mount: true Feb 12 13:33:41.707: INFO: created pod pod-service-account-nomountsa Feb 12 13:33:41.707: INFO: pod pod-service-account-nomountsa service account token volume mount: false Feb 12 13:33:41.723: INFO: created pod pod-service-account-defaultsa-mountspec Feb 12 13:33:41.723: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Feb 12 13:33:41.859: INFO: created pod pod-service-account-mountsa-mountspec Feb 12 13:33:41.859: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Feb 12 13:33:41.930: INFO: created pod pod-service-account-nomountsa-mountspec Feb 12 13:33:41.930: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Feb 12 13:33:42.114: INFO: created pod pod-service-account-defaultsa-nomountspec Feb 12 13:33:42.114: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Feb 12 13:33:42.143: INFO: created pod pod-service-account-mountsa-nomountspec Feb 12 13:33:42.143: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Feb 12 13:33:42.208: INFO: created pod pod-service-account-nomountsa-nomountspec Feb 12 13:33:42.208: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:33:42.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-1991" for this suite. 
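The ServiceAccounts test above creates pods that opt in or out of automatic API token mounting and checks which ones get the token volume. A minimal sketch of a pod-level opt-out; the pod name and image are illustrative and not taken from the log:

apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod                      # illustrative name
spec:
  automountServiceAccountToken: false     # pod-level opt-out; the same field also exists on ServiceAccount
  containers:
  - name: app
    image: k8s.gcr.io/pause:3.1           # any small image will do

As the mount/no-mount matrix in the log shows, a value set on the pod spec takes precedence over the ServiceAccount's own setting.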
Feb 12 13:34:22.997: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:34:23.134: INFO: namespace svcaccounts-1991 deletion completed in 39.479932903s • [SLOW TEST:42.209 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:34:23.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-61d8b92a-93b3-43bd-a70e-6c6522f84e7b in namespace container-probe-140 Feb 12 13:34:33.380: INFO: Started pod liveness-61d8b92a-93b3-43bd-a70e-6c6522f84e7b in namespace container-probe-140 STEP: checking the pod's current state and verifying that restartCount is present Feb 12 13:34:33.384: INFO: Initial restart count of pod liveness-61d8b92a-93b3-43bd-a70e-6c6522f84e7b is 0 Feb 12 13:34:57.632: INFO: Restart count of pod container-probe-140/liveness-61d8b92a-93b3-43bd-a70e-6c6522f84e7b is now 1 (24.248719127s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:34:57.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-140" for this suite. 
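The probe test above watches restartCount go from 0 to 1 once the /healthz endpoint starts failing. A minimal sketch of a comparable liveness probe; the image name and port are illustrative, while the e2e test uses a purpose-built image that serves /healthz and then begins returning errors:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-http                     # illustrative
spec:
  containers:
  - name: liveness
    image: registry.example/liveness:1.1  # illustrative; must serve /healthz and later fail it
    ports:
    - containerPort: 8080
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 15
      failureThreshold: 1                 # a single failed probe triggers a container restart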
Feb 12 13:35:03.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:35:03.956: INFO: namespace container-probe-140 deletion completed in 6.210527241s • [SLOW TEST:40.822 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:35:03.956: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Feb 12 13:35:12.128: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Feb 12 13:35:27.317: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:35:27.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6053" for this suite. 
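The Pods Extended test above deletes a pod gracefully and then confirms the kubelet observed the termination notice. The window a pod gets for graceful shutdown is controlled by terminationGracePeriodSeconds; a minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: graceful-pod                      # illustrative
spec:
  terminationGracePeriodSeconds: 30       # SIGTERM first, SIGKILL once this window expires
  containers:
  - name: app
    image: nginx                          # illustrative

A delete request may also carry its own grace period, which overrides the spec value for that particular deletion.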
Feb 12 13:35:33.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:35:33.523: INFO: namespace pods-6053 deletion completed in 6.187402543s • [SLOW TEST:29.567 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:35:33.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0212 13:36:03.757348 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 12 13:36:03.757: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:36:03.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6181" for this suite. 
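The garbage collector test above deletes a Deployment with deleteOptions.PropagationPolicy set to Orphan and then waits 30 seconds to confirm the ReplicaSet is not cascaded away. A sketch of the delete options involved, assuming they are sent as the body of the DELETE request; the resource path in the comment is illustrative:

# body of DELETE /apis/apps/v1/namespaces/<ns>/deployments/<name>
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan                 # dependents keep running; the GC clears their ownerReferences instead of deleting them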
Feb 12 13:36:09.835: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:36:09.931: INFO: namespace gc-6181 deletion completed in 6.168460117s • [SLOW TEST:36.408 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:36:09.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-b7b1eff1-a32d-4276-b8a8-557f44a57c8f STEP: Creating a pod to test consume configMaps Feb 12 13:36:11.375: INFO: Waiting up to 5m0s for pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25" in namespace "configmap-6881" to be "success or failure" Feb 12 13:36:11.483: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 107.929012ms Feb 12 13:36:13.527: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.152237907s Feb 12 13:36:15.536: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161546116s Feb 12 13:36:17.545: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169899329s Feb 12 13:36:19.558: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Pending", Reason="", readiness=false. Elapsed: 8.183541613s Feb 12 13:36:21.566: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Running", Reason="", readiness=true. Elapsed: 10.191305755s Feb 12 13:36:23.576: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.201268027s STEP: Saw pod success Feb 12 13:36:23.576: INFO: Pod "pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25" satisfied condition "success or failure" Feb 12 13:36:23.582: INFO: Trying to get logs from node iruya-node pod pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25 container configmap-volume-test: STEP: delete the pod Feb 12 13:36:23.731: INFO: Waiting for pod pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25 to disappear Feb 12 13:36:23.739: INFO: Pod pod-configmaps-458d5682-af2c-4200-8efd-938bb238fc25 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:36:23.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6881" for this suite. Feb 12 13:36:29.783: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:36:29.922: INFO: namespace configmap-6881 deletion completed in 6.173043715s • [SLOW TEST:19.990 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:36:29.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Feb 12 13:36:30.070: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
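Registering an extension API server with the aggregator comes down to an APIService object that tells the kube-apiserver where to proxy a group/version. A minimal sketch, assuming the wardle.k8s.io group used by the upstream sample apiserver; the backing Service name and namespace here are illustrative:

apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  service:
    name: sample-api                      # illustrative Service fronting the extension apiserver
    namespace: kube-system                # illustrative
  insecureSkipTLSVerify: true             # test-only shortcut; real setups set caBundle instead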
Feb 12 13:36:30.598: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Feb 12 13:36:33.156: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:36:35.211: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:36:37.163: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:36:39.166: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717111390, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 12 13:36:42.244: INFO: Waited 1.05414277s for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:36:42.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7885" for this suite. Feb 12 13:36:49.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:36:49.207: INFO: namespace aggregator-7885 deletion completed in 6.353973555s • [SLOW TEST:19.285 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:36:49.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-839dc37a-3a84-4a06-a8b2-71ca6a7b0f1a STEP: Creating a pod to test consume secrets Feb 12 13:36:49.300: INFO: Waiting up to 5m0s for pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58" in namespace "secrets-5858" to be "success or failure" Feb 12 13:36:49.310: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.05665ms Feb 12 13:36:51.317: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016669806s Feb 12 13:36:53.325: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0253474s Feb 12 13:36:55.332: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032618758s Feb 12 13:36:57.345: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Pending", Reason="", readiness=false. 
Elapsed: 8.045550022s Feb 12 13:36:59.353: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053071924s STEP: Saw pod success Feb 12 13:36:59.353: INFO: Pod "pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58" satisfied condition "success or failure" Feb 12 13:36:59.358: INFO: Trying to get logs from node iruya-node pod pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58 container secret-volume-test: STEP: delete the pod Feb 12 13:36:59.425: INFO: Waiting for pod pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58 to disappear Feb 12 13:36:59.435: INFO: Pod pod-secrets-b72f8974-410b-4360-94e9-06fcbcb93e58 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:36:59.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5858" for this suite. Feb 12 13:37:05.535: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:37:05.657: INFO: namespace secrets-5858 deletion completed in 6.212891139s • [SLOW TEST:16.450 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:37:05.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 12 13:37:05.795: INFO: Waiting up to 5m0s for pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34" in namespace "downward-api-3465" to be "success or failure" Feb 12 13:37:05.816: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 21.049202ms Feb 12 13:37:07.833: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037246895s Feb 12 13:37:09.865: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069233724s Feb 12 13:37:11.881: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.085684301s Feb 12 13:37:13.901: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Pending", Reason="", readiness=false. Elapsed: 8.105356501s Feb 12 13:37:15.909: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.113139702s STEP: Saw pod success Feb 12 13:37:15.909: INFO: Pod "downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34" satisfied condition "success or failure" Feb 12 13:37:15.913: INFO: Trying to get logs from node iruya-node pod downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34 container dapi-container: STEP: delete the pod Feb 12 13:37:15.985: INFO: Waiting for pod downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34 to disappear Feb 12 13:37:15.991: INFO: Pod downward-api-d5eab80e-0bc1-43f0-ba68-7039b880eb34 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:37:15.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3465" for this suite. Feb 12 13:37:22.024: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:37:22.172: INFO: namespace downward-api-3465 deletion completed in 6.174525189s • [SLOW TEST:16.515 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:37:22.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-2a3ed00d-b5c7-4905-ab4d-89e4c53a856d STEP: Creating a pod to test consume configMaps Feb 12 13:37:22.357: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f" in namespace "configmap-7367" to be "success or failure" Feb 12 13:37:22.394: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 36.723413ms Feb 12 13:37:24.401: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043888175s Feb 12 13:37:26.426: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069052924s Feb 12 13:37:28.431: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073924072s Feb 12 13:37:30.440: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.082974225s STEP: Saw pod success Feb 12 13:37:30.440: INFO: Pod "pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f" satisfied condition "success or failure" Feb 12 13:37:30.446: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f container configmap-volume-test: STEP: delete the pod Feb 12 13:37:30.584: INFO: Waiting for pod pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f to disappear Feb 12 13:37:30.593: INFO: Pod pod-configmaps-9e52ee81-c0b3-46d4-82e2-996494392b8f no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:37:30.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7367" for this suite. Feb 12 13:37:36.625: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:37:36.734: INFO: namespace configmap-7367 deletion completed in 6.133802708s • [SLOW TEST:14.561 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:37:36.735: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 12 13:37:48.061: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:37:48.260: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9298" for this suite. 
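The container runtime test above expects the termination message "OK" to be read from the termination-log file even though FallbackToLogsOnError is set, because the file is non-empty and the pod succeeds. A minimal sketch of such a container, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo          # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "printf OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError   # log tail is used only if the file is empty and the container failed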
Feb 12 13:37:54.288: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:37:54.415: INFO: namespace container-runtime-9298 deletion completed in 6.1484855s • [SLOW TEST:17.681 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:37:54.416: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 12 13:37:54.618: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"e4feee08-4392-471e-9361-91479838a997", Controller:(*bool)(0xc0015d663a), BlockOwnerDeletion:(*bool)(0xc0015d663b)}} Feb 12 13:37:54.638: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"7903ff4a-faed-403e-ab5a-672dea838f3e", Controller:(*bool)(0xc002b8e73a), BlockOwnerDeletion:(*bool)(0xc002b8e73b)}} Feb 12 13:37:54.775: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2bad35f1-0325-470d-93e6-ff0683bac86b", Controller:(*bool)(0xc0015d67fa), BlockOwnerDeletion:(*bool)(0xc0015d67fb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:37:59.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7290" for this suite. 
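The dependency-circle test above wires three pods into a loop of ownerReferences (pod1 owned by pod3, pod2 by pod1, pod3 by pod2) and checks that the garbage collector still makes progress. A sketch of how one such reference looks on pod1, reusing the owner UID reported in the log; in a real cluster the uid must match the live owner object:

apiVersion: v1
kind: Pod
metadata:
  name: pod1
  ownerReferences:
  - apiVersion: v1
    kind: Pod
    name: pod3
    uid: e4feee08-4392-471e-9361-91479838a997   # must be the actual UID of pod3
    blockOwnerDeletion: true
spec:
  containers:
  - name: app
    image: nginx                                # illustrative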
Feb 12 13:38:05.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:38:06.059: INFO: namespace gc-7290 deletion completed in 6.188794907s • [SLOW TEST:11.644 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:38:06.059: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs Feb 12 13:38:06.195: INFO: Waiting up to 5m0s for pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898" in namespace "emptydir-7639" to be "success or failure" Feb 12 13:38:06.222: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 25.995659ms Feb 12 13:38:08.232: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036202417s Feb 12 13:38:10.242: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045835698s Feb 12 13:38:12.260: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06431553s Feb 12 13:38:14.273: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077819943s Feb 12 13:38:16.282: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.086228302s STEP: Saw pod success Feb 12 13:38:16.282: INFO: Pod "pod-468e4b5c-15e7-4945-b6ed-764633c57898" satisfied condition "success or failure" Feb 12 13:38:16.288: INFO: Trying to get logs from node iruya-node pod pod-468e4b5c-15e7-4945-b6ed-764633c57898 container test-container: STEP: delete the pod Feb 12 13:38:16.546: INFO: Waiting for pod pod-468e4b5c-15e7-4945-b6ed-764633c57898 to disappear Feb 12 13:38:16.566: INFO: Pod pod-468e4b5c-15e7-4945-b6ed-764633c57898 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:38:16.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7639" for this suite. 
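The EmptyDir test above mounts a tmpfs-backed volume and checks the mount's mode. A minimal sketch with illustrative names; the command simply prints the mount's permissions:

apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs                    # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "ls -ld /mnt/scratch"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                      # back the volume with tmpfs instead of node disk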
Feb 12 13:38:22.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:38:22.877: INFO: namespace emptydir-7639 deletion completed in 6.302622207s • [SLOW TEST:16.818 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:38:22.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-64e92925-fce4-4779-bf7e-bea91e9484fe STEP: Creating a pod to test consume secrets Feb 12 13:38:23.007: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b" in namespace "projected-5588" to be "success or failure" Feb 12 13:38:23.027: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 19.989357ms Feb 12 13:38:25.297: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.290125494s Feb 12 13:38:27.307: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.299256575s Feb 12 13:38:29.315: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.30748499s Feb 12 13:38:31.323: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.31599672s Feb 12 13:38:33.332: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.324359049s STEP: Saw pod success Feb 12 13:38:33.332: INFO: Pod "pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b" satisfied condition "success or failure" Feb 12 13:38:33.335: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b container projected-secret-volume-test: STEP: delete the pod Feb 12 13:38:33.410: INFO: Waiting for pod pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b to disappear Feb 12 13:38:33.450: INFO: Pod pod-projected-secrets-8aeba1a1-9d3e-44a3-806f-f287f947381b no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:38:33.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5588" for this suite. 
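The projected-secret test above maps a secret key to a custom path with an explicit per-item mode. A minimal sketch that reuses the secret name from the log; the key, path, and mount point are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo             # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    command: ["cat", "/etc/projected/new-path"]   # illustrative path
    volumeMounts:
    - name: proj
      mountPath: /etc/projected
      readOnly: true
  volumes:
  - name: proj
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-64e92925-fce4-4779-bf7e-bea91e9484fe
          items:
          - key: data-1                   # illustrative key
            path: new-path
            mode: 0400                    # per-item file mode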
Feb 12 13:38:39.493: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:38:39.627: INFO: namespace projected-5588 deletion completed in 6.15832185s • [SLOW TEST:16.749 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:38:39.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-946965b2-caf2-400b-8968-55b41f81fa61 STEP: Creating a pod to test consume secrets Feb 12 13:38:39.771: INFO: Waiting up to 5m0s for pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190" in namespace "secrets-6750" to be "success or failure" Feb 12 13:38:39.778: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626194ms Feb 12 13:38:41.798: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026713956s Feb 12 13:38:43.815: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044030261s Feb 12 13:38:45.824: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053128921s Feb 12 13:38:47.832: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060550379s Feb 12 13:38:49.839: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.067479517s STEP: Saw pod success Feb 12 13:38:49.839: INFO: Pod "pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190" satisfied condition "success or failure" Feb 12 13:38:49.841: INFO: Trying to get logs from node iruya-node pod pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190 container secret-volume-test: STEP: delete the pod Feb 12 13:38:50.296: INFO: Waiting for pod pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190 to disappear Feb 12 13:38:50.404: INFO: Pod pod-secrets-3461e739-5407-4b43-ad4b-61e14f068190 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:38:50.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6750" for this suite. 
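The Secrets volume test above mounts a whole secret with default settings, so every key appears as a file under the mount path. A minimal sketch reusing the secret name from the log; the key and mount path are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo                # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["cat", "/etc/secret-volume/data-1"]   # illustrative key name
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-946965b2-caf2-400b-8968-55b41f81fa61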
Feb 12 13:38:56.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:38:56.644: INFO: namespace secrets-6750 deletion completed in 6.230376439s • [SLOW TEST:17.017 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:38:56.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-6206/secret-test-df04230b-add1-4842-bfbb-b51e0506648f STEP: Creating a pod to test consume secrets Feb 12 13:38:56.803: INFO: Waiting up to 5m0s for pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1" in namespace "secrets-6206" to be "success or failure" Feb 12 13:38:56.810: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.341014ms Feb 12 13:38:58.817: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013515998s Feb 12 13:39:00.825: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022337306s Feb 12 13:39:02.867: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063500617s Feb 12 13:39:04.876: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073396494s Feb 12 13:39:06.886: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.082764403s STEP: Saw pod success Feb 12 13:39:06.886: INFO: Pod "pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1" satisfied condition "success or failure" Feb 12 13:39:06.889: INFO: Trying to get logs from node iruya-node pod pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1 container env-test: STEP: delete the pod Feb 12 13:39:07.012: INFO: Waiting for pod pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1 to disappear Feb 12 13:39:07.032: INFO: Pod pod-configmaps-ecdcaf10-6089-4ebf-8d12-691e055b47e1 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:39:07.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6206" for this suite. 
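The environment-variable Secrets test above injects a secret key as an env var rather than a file. A minimal sketch reusing the secret name from the log; the key and variable name are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo                   # illustrative
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["/bin/sh", "-c", "env"]
    env:
    - name: SECRET_DATA                   # illustrative variable name
      valueFrom:
        secretKeyRef:
          name: secret-test-df04230b-add1-4842-bfbb-b51e0506648f
          key: data-1                     # illustrative key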
Feb 12 13:39:13.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:39:13.153: INFO: namespace secrets-6206 deletion completed in 6.11522247s • [SLOW TEST:16.508 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:39:13.153: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info Feb 12 13:39:13.195: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' Feb 12 13:39:15.502: INFO: stderr: "" Feb 12 13:39:15.502: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:39:15.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4276" for this suite. 
Feb 12 13:39:21.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:39:21.659: INFO: namespace kubectl-4276 deletion completed in 6.148835711s • [SLOW TEST:8.506 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:39:21.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-a443f14a-28a5-4dbb-9f38-586fcfb09d15 STEP: Creating a pod to test consume configMaps Feb 12 13:39:21.806: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0" in namespace "projected-53" to be "success or failure" Feb 12 13:39:21.822: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 15.169605ms Feb 12 13:39:23.839: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032837657s Feb 12 13:39:25.853: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046246932s Feb 12 13:39:27.865: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058087195s Feb 12 13:39:29.874: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067094613s Feb 12 13:39:31.883: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.076915115s STEP: Saw pod success Feb 12 13:39:31.884: INFO: Pod "pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0" satisfied condition "success or failure" Feb 12 13:39:31.886: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0 container projected-configmap-volume-test: STEP: delete the pod Feb 12 13:39:31.936: INFO: Waiting for pod pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0 to disappear Feb 12 13:39:31.948: INFO: Pod pod-projected-configmaps-f132c624-65ba-496b-bf18-021b570229e0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:39:31.948: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-53" for this suite. Feb 12 13:39:38.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:39:38.230: INFO: namespace projected-53 deletion completed in 6.261531286s • [SLOW TEST:16.571 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:39:38.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Feb 12 13:39:38.388: INFO: Waiting up to 5m0s for pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01" in namespace "containers-3330" to be "success or failure" Feb 12 13:39:38.408: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 20.406503ms Feb 12 13:39:40.424: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036141728s Feb 12 13:39:42.431: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043756219s Feb 12 13:39:44.446: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058217787s Feb 12 13:39:46.462: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073992837s Feb 12 13:39:48.472: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.084249598s STEP: Saw pod success Feb 12 13:39:48.472: INFO: Pod "client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01" satisfied condition "success or failure" Feb 12 13:39:48.477: INFO: Trying to get logs from node iruya-node pod client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01 container test-container: STEP: delete the pod Feb 12 13:39:48.623: INFO: Waiting for pod client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01 to disappear Feb 12 13:39:48.628: INFO: Pod client-containers-076e9e3a-cbab-431b-9b86-5e3dcfbaee01 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:39:48.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3330" for this suite. Feb 12 13:39:54.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:39:54.829: INFO: namespace containers-3330 deletion completed in 6.195288562s • [SLOW TEST:16.599 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:39:54.830: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:40:06.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7120" for this suite. 
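The adoption test above first creates a bare Pod labelled name=pod-adoption and then a ReplicationController whose selector matches that label, after which the controller takes ownership of the orphan. A minimal sketch of that pair of objects (the image and replica count are assumptions, not taken from this run) could look like:

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine   # placeholder image
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption          # matches the labels of the pre-existing Pod
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine

Because the controller's selector matches the pre-existing Pod's labels, it adopts that Pod (adding itself as an ownerReference) rather than creating a new replica, which is the "Then the orphan pod is adopted" step above.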
Feb 12 13:40:28.118: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:40:28.269: INFO: namespace replication-controller-7120 deletion completed in 22.224605815s • [SLOW TEST:33.440 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:40:28.270: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:41:28.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2458" for this suite. Feb 12 13:41:50.440: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:41:50.518: INFO: namespace container-probe-2458 deletion completed in 22.12566744s • [SLOW TEST:82.249 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:41:50.521: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Feb 12 13:41:50.705: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Feb 12 13:41:50.758: INFO: Waiting for terminating namespaces to be deleted... 
Feb 12 13:41:50.761: INFO: Logging pods the kubelet thinks is on node iruya-node before test Feb 12 13:41:50.784: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Feb 12 13:41:50.785: INFO: Container weave ready: true, restart count 0 Feb 12 13:41:50.785: INFO: Container weave-npc ready: true, restart count 0 Feb 12 13:41:50.785: INFO: kube-bench-j7kcs from default started at 2020-02-11 06:42:30 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.785: INFO: Container kube-bench ready: false, restart count 0 Feb 12 13:41:50.785: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.785: INFO: Container kube-proxy ready: true, restart count 0 Feb 12 13:41:50.785: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Feb 12 13:41:50.886: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.886: INFO: Container etcd ready: true, restart count 0 Feb 12 13:41:50.886: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Feb 12 13:41:50.886: INFO: Container weave ready: true, restart count 0 Feb 12 13:41:50.886: INFO: Container weave-npc ready: true, restart count 0 Feb 12 13:41:50.886: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.886: INFO: Container coredns ready: true, restart count 0 Feb 12 13:41:50.886: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.886: INFO: Container kube-controller-manager ready: true, restart count 21 Feb 12 13:41:50.886: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.886: INFO: Container kube-proxy ready: true, restart count 0 Feb 12 13:41:50.886: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.886: INFO: Container kube-apiserver ready: true, restart count 0 Feb 12 13:41:50.886: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.886: INFO: Container kube-scheduler ready: true, restart count 13 Feb 12 13:41:50.886: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Feb 12 13:41:50.886: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f2ab6ea958f335], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:41:51.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4809" for this suite. 
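The FailedScheduling event above ("2 node(s) didn't match node selector") is what the scheduler reports when a Pod asks for a node label that no node carries. A minimal sketch, with a hypothetical label key/value and a placeholder image:

apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    env: does-not-exist             # hypothetical label that no node has
  containers:
  - name: restricted-pod
    image: k8s.gcr.io/pause:3.1     # placeholder image

Such a Pod stays Pending, and the scheduler keeps emitting FailedScheduling events until a node with a matching label appears or the Pod is deleted.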
Feb 12 13:41:57.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:41:58.075: INFO: namespace sched-pred-4809 deletion completed in 6.125897306s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.554 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:41:58.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-5689 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 12 13:41:58.235: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 12 13:42:36.497: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5689 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 13:42:36.497: INFO: >>> kubeConfig: /root/.kube/config I0212 13:42:36.584544 8 log.go:172] (0xc000ccf600) (0xc0025de140) Create stream I0212 13:42:36.584723 8 log.go:172] (0xc000ccf600) (0xc0025de140) Stream added, broadcasting: 1 I0212 13:42:36.603817 8 log.go:172] (0xc000ccf600) Reply frame received for 1 I0212 13:42:36.603942 8 log.go:172] (0xc000ccf600) (0xc0025de1e0) Create stream I0212 13:42:36.603954 8 log.go:172] (0xc000ccf600) (0xc0025de1e0) Stream added, broadcasting: 3 I0212 13:42:36.628440 8 log.go:172] (0xc000ccf600) Reply frame received for 3 I0212 13:42:36.628481 8 log.go:172] (0xc000ccf600) (0xc0025de280) Create stream I0212 13:42:36.628495 8 log.go:172] (0xc000ccf600) (0xc0025de280) Stream added, broadcasting: 5 I0212 13:42:36.630896 8 log.go:172] (0xc000ccf600) Reply frame received for 5 I0212 13:42:36.844850 8 log.go:172] (0xc000ccf600) Data frame received for 3 I0212 13:42:36.844918 8 log.go:172] (0xc0025de1e0) (3) Data frame handling I0212 13:42:36.844945 8 log.go:172] (0xc0025de1e0) (3) Data frame sent I0212 13:42:37.015690 8 log.go:172] (0xc000ccf600) (0xc0025de1e0) Stream removed, broadcasting: 3 I0212 13:42:37.015920 8 log.go:172] (0xc000ccf600) Data frame received for 1 I0212 13:42:37.016094 8 log.go:172] (0xc000ccf600) (0xc0025de280) Stream removed, broadcasting: 5 I0212 13:42:37.016154 8 log.go:172] (0xc0025de140) (1) Data frame handling I0212 13:42:37.016196 8 
log.go:172] (0xc0025de140) (1) Data frame sent I0212 13:42:37.016209 8 log.go:172] (0xc000ccf600) (0xc0025de140) Stream removed, broadcasting: 1 I0212 13:42:37.016257 8 log.go:172] (0xc000ccf600) Go away received I0212 13:42:37.016670 8 log.go:172] (0xc000ccf600) (0xc0025de140) Stream removed, broadcasting: 1 I0212 13:42:37.016728 8 log.go:172] (0xc000ccf600) (0xc0025de1e0) Stream removed, broadcasting: 3 I0212 13:42:37.016744 8 log.go:172] (0xc000ccf600) (0xc0025de280) Stream removed, broadcasting: 5 Feb 12 13:42:37.016: INFO: Waiting for endpoints: map[] Feb 12 13:42:37.029: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5689 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 13:42:37.029: INFO: >>> kubeConfig: /root/.kube/config I0212 13:42:37.099627 8 log.go:172] (0xc0028182c0) (0xc0016694a0) Create stream I0212 13:42:37.099679 8 log.go:172] (0xc0028182c0) (0xc0016694a0) Stream added, broadcasting: 1 I0212 13:42:37.107926 8 log.go:172] (0xc0028182c0) Reply frame received for 1 I0212 13:42:37.108023 8 log.go:172] (0xc0028182c0) (0xc0025de320) Create stream I0212 13:42:37.108037 8 log.go:172] (0xc0028182c0) (0xc0025de320) Stream added, broadcasting: 3 I0212 13:42:37.109646 8 log.go:172] (0xc0028182c0) Reply frame received for 3 I0212 13:42:37.109666 8 log.go:172] (0xc0028182c0) (0xc001e2fa40) Create stream I0212 13:42:37.109671 8 log.go:172] (0xc0028182c0) (0xc001e2fa40) Stream added, broadcasting: 5 I0212 13:42:37.110815 8 log.go:172] (0xc0028182c0) Reply frame received for 5 I0212 13:42:37.220834 8 log.go:172] (0xc0028182c0) Data frame received for 3 I0212 13:42:37.220990 8 log.go:172] (0xc0025de320) (3) Data frame handling I0212 13:42:37.221049 8 log.go:172] (0xc0025de320) (3) Data frame sent I0212 13:42:37.378371 8 log.go:172] (0xc0028182c0) (0xc0025de320) Stream removed, broadcasting: 3 I0212 13:42:37.378650 8 log.go:172] (0xc0028182c0) Data frame received for 1 I0212 13:42:37.378694 8 log.go:172] (0xc0016694a0) (1) Data frame handling I0212 13:42:37.378724 8 log.go:172] (0xc0016694a0) (1) Data frame sent I0212 13:42:37.378757 8 log.go:172] (0xc0028182c0) (0xc0016694a0) Stream removed, broadcasting: 1 I0212 13:42:37.378898 8 log.go:172] (0xc0028182c0) (0xc001e2fa40) Stream removed, broadcasting: 5 I0212 13:42:37.378961 8 log.go:172] (0xc0028182c0) Go away received I0212 13:42:37.379029 8 log.go:172] (0xc0028182c0) (0xc0016694a0) Stream removed, broadcasting: 1 I0212 13:42:37.379047 8 log.go:172] (0xc0028182c0) (0xc0025de320) Stream removed, broadcasting: 3 I0212 13:42:37.379061 8 log.go:172] (0xc0028182c0) (0xc001e2fa40) Stream removed, broadcasting: 5 Feb 12 13:42:37.379: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:42:37.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5689" for this suite. 
Feb 12 13:43:01.469: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:43:01.590: INFO: namespace pod-network-test-5689 deletion completed in 24.20064005s • [SLOW TEST:63.515 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:43:01.590: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Feb 12 13:43:10.840: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:43:10.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3457" for this suite. 
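The termination-message check above ("Expected: &{DONE} to match Container's Termination Message: DONE") relies on a container that runs as a non-root user and writes its message to a non-default terminationMessagePath. A sketch of such a Pod, with the path, UID and image as assumed placeholder values:

apiVersion: v1
kind: Pod
metadata:
  name: termination-message-container
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox                                          # placeholder image
    command: ["/bin/sh", "-c", "echo -n DONE > /dev/termination-custom-log"]
    terminationMessagePath: /dev/termination-custom-log     # non-default path (assumed value)
    securityContext:
      runAsUser: 1000                                       # non-root UID (assumed value)

After the container exits, the kubelet copies the file's contents into status.containerStatuses[].state.terminated.message, which is the field the test asserts against.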
Feb 12 13:43:18.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:43:19.070: INFO: namespace container-runtime-3457 deletion completed in 8.143446577s • [SLOW TEST:17.480 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:43:19.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 13:43:19.205: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a" in namespace "projected-3598" to be "success or failure" Feb 12 13:43:19.214: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.244646ms Feb 12 13:43:21.223: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017786776s Feb 12 13:43:23.228: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022925437s Feb 12 13:43:25.237: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031853136s Feb 12 13:43:27.245: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.040152209s Feb 12 13:43:29.252: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.047335049s STEP: Saw pod success Feb 12 13:43:29.252: INFO: Pod "downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a" satisfied condition "success or failure" Feb 12 13:43:29.258: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a container client-container: STEP: delete the pod Feb 12 13:43:29.428: INFO: Waiting for pod downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a to disappear Feb 12 13:43:29.437: INFO: Pod downwardapi-volume-ffedab21-cf46-4ec9-b628-8b0c4a05743a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:43:29.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3598" for this suite. Feb 12 13:43:35.509: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:43:35.667: INFO: namespace projected-3598 deletion completed in 6.221724949s • [SLOW TEST:16.596 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:43:35.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Feb 12 13:43:59.921: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 12 13:43:59.934: INFO: Pod pod-with-prestop-http-hook still exists Feb 12 13:44:01.934: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 12 13:44:01.950: INFO: Pod pod-with-prestop-http-hook still exists Feb 12 13:44:03.934: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 12 13:44:03.975: INFO: Pod pod-with-prestop-http-hook still exists Feb 12 13:44:05.934: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Feb 12 13:44:05.944: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:44:05.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6812" for this suite. 
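The lifecycle-hook test above first starts a server pod to receive the HTTPGet hook request and then creates pod-with-prestop-http-hook; on deletion, the kubelet performs the preStop HTTP GET before stopping the container. A rough sketch of the hooked Pod (path, port, target IP and image are assumptions, not taken from this run):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: k8s.gcr.io/pause:3.1          # placeholder image
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop        # assumed handler path
          port: 8080                     # assumed handler port
          host: 10.44.0.1                # hypothetical IP of the handler pod created earlier

The "check prestop hook" step then asks the handler pod whether it received the GET, confirming that the hook ran during the graceful deletion seen in the log above.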
Feb 12 13:44:28.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:44:28.136: INFO: namespace container-lifecycle-hook-6812 deletion completed in 22.146788953s • [SLOW TEST:52.469 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:44:28.136: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 12 13:44:28.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Feb 12 13:44:28.389: INFO: stderr: "" Feb 12 13:44:28.389: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:44:28.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-396" for this suite. 
Feb 12 13:44:34.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:44:34.561: INFO: namespace kubectl-396 deletion completed in 6.161243777s • [SLOW TEST:6.425 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:44:34.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Feb 12 13:44:34.635: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:44:49.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2806" for this suite. 
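The InitContainer test above builds a RestartNever Pod whose first init container always fails, so the remaining init container and the app container never start and the Pod ends up Failed. A minimal sketch with placeholder names and images:

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-fail
spec:
  restartPolicy: Never
  initContainers:
  - name: init1
    image: busybox                  # placeholder image
    command: ["/bin/false"]         # always fails
  - name: init2
    image: busybox
    command: ["/bin/true"]          # never reached
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1     # app container that must not start

With restartPolicy Never, the kubelet does not retry the failed init container, so the Pod goes straight to a Failed phase.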
Feb 12 13:44:55.487: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:44:55.664: INFO: namespace init-container-2806 deletion completed in 6.201743272s • [SLOW TEST:21.103 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:44:55.666: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0212 13:45:06.171457 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Feb 12 13:45:06.171: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:45:06.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6955" for this suite. 
Feb 12 13:45:12.433: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:45:12.646: INFO: namespace gc-6955 deletion completed in 6.467689331s • [SLOW TEST:16.981 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:45:12.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-59d06856-ce50-4446-9df5-4d13ac161eab STEP: Creating a pod to test consume secrets Feb 12 13:45:12.803: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29" in namespace "projected-5668" to be "success or failure" Feb 12 13:45:12.885: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Pending", Reason="", readiness=false. Elapsed: 81.921256ms Feb 12 13:45:14.898: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095000803s Feb 12 13:45:16.908: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.104328887s Feb 12 13:45:18.918: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114167149s Feb 12 13:45:20.926: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122729628s STEP: Saw pod success Feb 12 13:45:20.926: INFO: Pod "pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29" satisfied condition "success or failure" Feb 12 13:45:20.929: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29 container secret-volume-test: STEP: delete the pod Feb 12 13:45:21.027: INFO: Waiting for pod pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29 to disappear Feb 12 13:45:21.147: INFO: Pod pod-projected-secrets-cd9f57c2-89c4-41a2-9703-b7d7c7d07f29 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:45:21.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5668" for this suite. 
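The projected-secret test above creates one Secret and consumes it through two projected volumes in the same Pod, then reads both copies from the test container. A sketch of that shape (the secret key, value, mount paths and image are assumed placeholders; the random name suffixes from the run are omitted):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-test
data:
  data-1: dmFsdWUtMQ==              # base64 of "value-1" (placeholder)
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                  # placeholder image
    command: ["/bin/sh", "-c", "cat /etc/projected-secret-volume-1/data-1 /etc/projected-secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/projected-secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/projected-secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: secret-volume-2
    projected:
      sources:
      - secret:
          name: projected-secret-test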
Feb 12 13:45:27.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:45:27.407: INFO: namespace projected-5668 deletion completed in 6.250170524s • [SLOW TEST:14.759 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:45:27.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Feb 12 13:45:27.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5662' Feb 12 13:45:27.692: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 12 13:45:27.692: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Feb 12 13:45:27.739: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-n95vm] Feb 12 13:45:27.739: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-n95vm" in namespace "kubectl-5662" to be "running and ready" Feb 12 13:45:27.811: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Pending", Reason="", readiness=false. Elapsed: 72.400403ms Feb 12 13:45:29.881: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.142040287s Feb 12 13:45:31.918: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178909752s Feb 12 13:45:33.936: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.196856304s Feb 12 13:45:35.945: INFO: Pod "e2e-test-nginx-rc-n95vm": Phase="Running", Reason="", readiness=true. Elapsed: 8.206367343s Feb 12 13:45:35.946: INFO: Pod "e2e-test-nginx-rc-n95vm" satisfied condition "running and ready" Feb 12 13:45:35.946: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-n95vm] Feb 12 13:45:35.946: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-5662' Feb 12 13:45:36.144: INFO: stderr: "" Feb 12 13:45:36.144: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Feb 12 13:45:36.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5662' Feb 12 13:45:36.253: INFO: stderr: "" Feb 12 13:45:36.254: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:45:36.254: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5662" for this suite. Feb 12 13:46:00.285: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:46:00.408: INFO: namespace kubectl-5662 deletion completed in 24.151161308s • [SLOW TEST:33.002 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:46:00.409: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name s-test-opt-del-835819a4-445a-42a7-984b-6674928af6a9 STEP: Creating secret with name s-test-opt-upd-f4b0ad67-7ccf-4468-a90c-e109cec892c6 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-835819a4-445a-42a7-984b-6674928af6a9 STEP: Updating secret s-test-opt-upd-f4b0ad67-7ccf-4468-a90c-e109cec892c6 STEP: Creating secret with name s-test-opt-create-4e3095a0-a928-4004-996d-bfe679d45d2f STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:47:22.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6833" for this suite. 
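The "optional updates" test above watches a projected volume while one referenced Secret is deleted, another is updated, and a third is created; marking the sources optional is what lets the volume tolerate a missing Secret. A sketch of such a volume (random name suffixes from the run omitted; mount path, command and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-optional
spec:
  containers:
  - name: secrets-volume-test
    image: busybox                        # placeholder image
    command: ["/bin/sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: secret-volumes
      mountPath: /etc/projected-secret-volumes
      readOnly: true
  volumes:
  - name: secret-volumes
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true
      - secret:
          name: s-test-opt-upd
          optional: true
      - secret:
          name: s-test-opt-create
          optional: true

The kubelet periodically re-syncs projected volumes, so deletions, updates and late-created optional Secrets eventually appear (or disappear) in the mounted directory, which is the "waiting to observe update in volume" step above.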
Feb 12 13:47:44.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:47:44.330: INFO: namespace projected-6833 deletion completed in 22.168556536s • [SLOW TEST:103.921 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:47:44.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-bb523f1b-efa1-4a38-81dd-63a7b8bde530 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-bb523f1b-efa1-4a38-81dd-63a7b8bde530 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:49:08.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6995" for this suite. 
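The ConfigMap update test above mounts a ConfigMap as a volume, patches the ConfigMap, and waits for the kubelet to rewrite the file inside the running container. A sketch of the objects involved (key, value, paths and image are assumed placeholders; the random name suffix from the run is dropped):

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-upd
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-upd
spec:
  containers:
  - name: configmap-volume-test
    image: busybox                  # placeholder image
    command: ["/bin/sh", "-c", "while true; do cat /etc/configmap-volume/data-1; sleep 5; done"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-upd

Propagation is not instantaneous: the kubelet only refreshes ConfigMap volumes on its sync loop, which is why the "waiting to observe update in volume" step above can take a while.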
Feb 12 13:49:30.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:49:30.667: INFO: namespace configmap-6995 deletion completed in 22.151992904s • [SLOW TEST:106.337 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:49:30.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 13:49:30.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c" in namespace "projected-7699" to be "success or failure" Feb 12 13:49:30.810: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 23.599521ms Feb 12 13:49:32.824: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036744047s Feb 12 13:49:34.831: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044347367s Feb 12 13:49:36.843: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056000429s Feb 12 13:49:38.864: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.077433159s Feb 12 13:49:40.893: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Running", Reason="", readiness=true. Elapsed: 10.10615198s Feb 12 13:49:43.048: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.261104281s STEP: Saw pod success Feb 12 13:49:43.048: INFO: Pod "downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c" satisfied condition "success or failure" Feb 12 13:49:43.059: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c container client-container: STEP: delete the pod Feb 12 13:49:43.536: INFO: Waiting for pod downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c to disappear Feb 12 13:49:43.542: INFO: Pod downwardapi-volume-4fe1e6d5-c09c-4953-9ece-187dae11203c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:49:43.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7699" for this suite. Feb 12 13:49:49.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:49:49.663: INFO: namespace projected-7699 deletion completed in 6.115518762s • [SLOW TEST:18.995 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:49:49.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-mf7c STEP: Creating a pod to test atomic-volume-subpath Feb 12 13:49:49.808: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-mf7c" in namespace "subpath-6062" to be "success or failure" Feb 12 13:49:49.842: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 34.279785ms Feb 12 13:49:51.853: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045220212s Feb 12 13:49:53.877: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068988568s Feb 12 13:49:55.891: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083725817s Feb 12 13:49:57.919: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.110834491s Feb 12 13:49:59.927: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 10.11888839s Feb 12 13:50:01.936: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. 
Elapsed: 12.128044111s Feb 12 13:50:03.950: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 14.142075195s Feb 12 13:50:05.959: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 16.151156691s Feb 12 13:50:07.966: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 18.158049845s Feb 12 13:50:09.974: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 20.165920994s Feb 12 13:50:11.983: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 22.174969821s Feb 12 13:50:13.990: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 24.182532604s Feb 12 13:50:15.997: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 26.189585481s Feb 12 13:50:18.006: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Running", Reason="", readiness=true. Elapsed: 28.197934515s Feb 12 13:50:20.013: INFO: Pod "pod-subpath-test-configmap-mf7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.204849795s STEP: Saw pod success Feb 12 13:50:20.013: INFO: Pod "pod-subpath-test-configmap-mf7c" satisfied condition "success or failure" Feb 12 13:50:20.017: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-mf7c container test-container-subpath-configmap-mf7c: STEP: delete the pod Feb 12 13:50:20.376: INFO: Waiting for pod pod-subpath-test-configmap-mf7c to disappear Feb 12 13:50:20.394: INFO: Pod pod-subpath-test-configmap-mf7c no longer exists STEP: Deleting pod pod-subpath-test-configmap-mf7c Feb 12 13:50:20.394: INFO: Deleting pod "pod-subpath-test-configmap-mf7c" in namespace "subpath-6062" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:50:20.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6062" for this suite. 
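The Subpath test above ("Atomic writer volumes ... subpaths with configmap pod") mounts a single key of a ConfigMap-backed volume into the container via subPath. A minimal sketch (ConfigMap name, key, paths and image are assumed placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath-configmap
    image: busybox                    # placeholder image
    command: ["/bin/sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config-volume
      mountPath: /etc/config/data-1
      subPath: data-1                 # mount only this key from the volume
  volumes:
  - name: config-volume
    configMap:
      name: subpath-configmap         # placeholder ConfigMap name

Note that subPath mounts are snapshots: unlike the whole-volume ConfigMap mount shown earlier, a file mounted via subPath does not receive later ConfigMap updates.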
Feb 12 13:50:26.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:50:26.575: INFO: namespace subpath-6062 deletion completed in 6.168194891s • [SLOW TEST:36.912 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:50:26.576: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-9f59ebdb-e73f-4d93-949a-e46fd206b9d1 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:50:26.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1657" for this suite. Feb 12 13:50:32.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:50:32.897: INFO: namespace secrets-1657 deletion completed in 6.197401015s • [SLOW TEST:6.322 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:50:32.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 12 13:50:33.033: INFO: Waiting up to 5m0s for pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540" in namespace "downward-api-369" to be "success or failure" Feb 12 13:50:33.046: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.059184ms Feb 12 13:50:35.060: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026473351s Feb 12 13:50:37.068: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035173123s Feb 12 13:50:39.086: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052625009s Feb 12 13:50:41.094: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060935297s STEP: Saw pod success Feb 12 13:50:41.094: INFO: Pod "downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540" satisfied condition "success or failure" Feb 12 13:50:41.098: INFO: Trying to get logs from node iruya-node pod downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540 container dapi-container: STEP: delete the pod Feb 12 13:50:41.199: INFO: Waiting for pod downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540 to disappear Feb 12 13:50:41.207: INFO: Pod downward-api-bb0ccaab-b009-4b36-901a-522a75b8f540 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:50:41.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-369" for this suite. Feb 12 13:50:47.230: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:50:47.383: INFO: namespace downward-api-369 deletion completed in 6.170217211s • [SLOW TEST:14.485 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:50:47.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Feb 12 13:50:47.616: INFO: Waiting up to 5m0s for pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702" in namespace "downward-api-5938" to be "success or failure" Feb 12 13:50:47.753: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 136.957409ms Feb 12 13:50:49.761: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 2.14477196s Feb 12 13:50:51.769: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.152260719s Feb 12 13:50:53.781: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164504031s Feb 12 13:50:55.799: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Pending", Reason="", readiness=false. Elapsed: 8.182756223s Feb 12 13:50:57.809: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.192959602s STEP: Saw pod success Feb 12 13:50:57.810: INFO: Pod "downward-api-8c858265-d791-4aea-9b93-6d059d906702" satisfied condition "success or failure" Feb 12 13:50:57.818: INFO: Trying to get logs from node iruya-node pod downward-api-8c858265-d791-4aea-9b93-6d059d906702 container dapi-container: STEP: delete the pod Feb 12 13:50:58.094: INFO: Waiting for pod downward-api-8c858265-d791-4aea-9b93-6d059d906702 to disappear Feb 12 13:50:58.116: INFO: Pod downward-api-8c858265-d791-4aea-9b93-6d059d906702 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:50:58.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5938" for this suite. Feb 12 13:51:04.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:51:04.439: INFO: namespace downward-api-5938 deletion completed in 6.223950304s • [SLOW TEST:17.056 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:51:04.440: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Feb 12 13:51:04.510: INFO: Waiting up to 5m0s for pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25" in namespace "var-expansion-7863" to be "success or failure" Feb 12 13:51:04.525: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Pending", Reason="", readiness=false. Elapsed: 14.849904ms Feb 12 13:51:06.536: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026174776s Feb 12 13:51:08.547: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037303832s Feb 12 13:51:10.558: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.047540888s Feb 12 13:51:12.580: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.069875357s STEP: Saw pod success Feb 12 13:51:12.580: INFO: Pod "var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25" satisfied condition "success or failure" Feb 12 13:51:12.592: INFO: Trying to get logs from node iruya-node pod var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25 container dapi-container: STEP: delete the pod Feb 12 13:51:12.755: INFO: Waiting for pod var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25 to disappear Feb 12 13:51:12.760: INFO: Pod var-expansion-1ee9314c-ea27-4e5c-83d2-09d64997df25 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:51:12.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7863" for this suite. Feb 12 13:51:18.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:51:19.009: INFO: namespace var-expansion-7863 deletion completed in 6.241668632s • [SLOW TEST:14.569 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:51:19.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Feb 12 13:51:19.158: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:51:19.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-419" for this suite. 
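The proxy check just above launches `kubectl proxy -p 0 --disable-filter` asynchronously and then curls /api/ through it. A rough Go equivalent is sketched below; it assumes kubectl is on PATH and that the proxy prints its chosen address ("Starting to serve on 127.0.0.1:NNNNN") as its first stdout line.

package main

import (
	"bufio"
	"fmt"
	"io/ioutil"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// --port=0 asks the kernel to pick a free port for the proxy.
	cmd := exec.Command("kubectl", "proxy", "--port=0")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()

	// First stdout line is assumed to end with the listen address.
	line, err := bufio.NewReader(stdout).ReadString('\n')
	if err != nil {
		panic(err)
	}
	addr := strings.TrimSpace(line[strings.LastIndex(line, " ")+1:])

	resp, err := http.Get("http://" + addr + "/api/")
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("proxy answered %d: %s\n", resp.StatusCode, body)
}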
Feb 12 13:51:25.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:51:25.462: INFO: namespace kubectl-419 deletion completed in 6.16765218s • [SLOW TEST:6.453 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:51:25.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-projected-all-test-volume-762b1a5b-5659-4bc3-a50b-0e0f24bd068d STEP: Creating secret with name secret-projected-all-test-volume-cb3f00da-6027-44a2-b223-5ef909a6c7cf STEP: Creating a pod to test Check all projections for projected volume plugin Feb 12 13:51:25.594: INFO: Waiting up to 5m0s for pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61" in namespace "projected-4504" to be "success or failure" Feb 12 13:51:25.599: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 5.481183ms Feb 12 13:51:27.609: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015466961s Feb 12 13:51:29.618: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023754027s Feb 12 13:51:31.625: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031300275s Feb 12 13:51:33.641: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.047600201s STEP: Saw pod success Feb 12 13:51:33.642: INFO: Pod "projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61" satisfied condition "success or failure" Feb 12 13:51:33.647: INFO: Trying to get logs from node iruya-node pod projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61 container projected-all-volume-test: STEP: delete the pod Feb 12 13:51:33.763: INFO: Waiting for pod projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61 to disappear Feb 12 13:51:33.767: INFO: Pod projected-volume-55f7f09a-60cd-484a-b5bd-a2ae234d0f61 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:51:33.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4504" for this suite. Feb 12 13:51:39.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:51:39.998: INFO: namespace projected-4504 deletion completed in 6.217135535s • [SLOW TEST:14.536 seconds] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31 should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:51:39.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Feb 12 13:51:40.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7280' Feb 12 13:51:42.562: INFO: stderr: "" Feb 12 13:51:42.562: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Feb 12 13:51:43.635: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:43.636: INFO: Found 0 / 1 Feb 12 13:51:44.579: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:44.579: INFO: Found 0 / 1 Feb 12 13:51:45.573: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:45.573: INFO: Found 0 / 1 Feb 12 13:51:46.580: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:46.580: INFO: Found 0 / 1 Feb 12 13:51:47.770: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:47.770: INFO: Found 0 / 1 Feb 12 13:51:48.673: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:48.674: INFO: Found 0 / 1 Feb 12 13:51:49.696: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:49.696: INFO: Found 0 / 1 Feb 12 13:51:50.585: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:50.585: INFO: Found 0 / 1 Feb 12 13:51:51.589: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:51.589: INFO: Found 0 / 1 Feb 12 13:51:52.582: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:52.583: INFO: Found 1 / 1 Feb 12 13:51:52.583: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Feb 12 13:51:52.588: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:52.588: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Feb 12 13:51:52.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-6pztl --namespace=kubectl-7280 -p {"metadata":{"annotations":{"x":"y"}}}' Feb 12 13:51:52.777: INFO: stderr: "" Feb 12 13:51:52.777: INFO: stdout: "pod/redis-master-6pztl patched\n" STEP: checking annotations Feb 12 13:51:52.792: INFO: Selector matched 1 pods for map[app:redis] Feb 12 13:51:52.792: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:51:52.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7280" for this suite. 
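The patch step above applies a strategic-merge patch that adds the annotation x=y to the pod behind the redis-master replication controller. The same command, driven from Go, is sketched below; the pod name and namespace are copied from the log, everything else is illustrative.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	patch := `{"metadata":{"annotations":{"x":"y"}}}`
	out, err := exec.Command(
		"kubectl", "--kubeconfig", "/root/.kube/config",
		"patch", "pod", "redis-master-6pztl",
		"--namespace", "kubectl-7280",
		"-p", patch,
	).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("patch failed: %v: %s", err, out))
	}
	fmt.Printf("%s", out) // expected: pod/redis-master-6pztl patched
}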
Feb 12 13:52:14.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:52:15.053: INFO: namespace kubectl-7280 deletion completed in 22.256850467s • [SLOW TEST:35.055 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:52:15.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 13:52:15.148: INFO: Waiting up to 5m0s for pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54" in namespace "downward-api-90" to be "success or failure" Feb 12 13:52:15.153: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08489ms Feb 12 13:52:17.161: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012091085s Feb 12 13:52:19.167: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 4.018916837s Feb 12 13:52:21.256: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107654063s Feb 12 13:52:23.266: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Pending", Reason="", readiness=false. Elapsed: 8.117929524s Feb 12 13:52:25.274: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.125818438s STEP: Saw pod success Feb 12 13:52:25.274: INFO: Pod "downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54" satisfied condition "success or failure" Feb 12 13:52:25.279: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54 container client-container: STEP: delete the pod Feb 12 13:52:25.354: INFO: Waiting for pod downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54 to disappear Feb 12 13:52:25.358: INFO: Pod downwardapi-volume-111ede7b-b818-44d1-8584-7b2ea3a9aa54 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:52:25.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-90" for this suite. Feb 12 13:52:31.380: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:52:31.505: INFO: namespace downward-api-90 deletion completed in 6.139866799s • [SLOW TEST:16.452 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:52:31.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-240e239b-148f-491d-adec-7f784f344aac in namespace container-probe-901 Feb 12 13:52:41.694: INFO: Started pod test-webserver-240e239b-148f-491d-adec-7f784f344aac in namespace container-probe-901 STEP: checking the pod's current state and verifying that restartCount is present Feb 12 13:52:41.698: INFO: Initial restart count of pod test-webserver-240e239b-148f-491d-adec-7f784f344aac is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:56:43.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-901" for this suite. 
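The probe test above starts a webserver pod with an HTTP liveness probe on /healthz and then verifies for roughly four minutes that restartCount never moves past 0. A minimal sketch of such a pod against the v1.15-era API this suite targets follows; the image, port and timings are illustrative, and in client-go 1.23+ the embedded field is named ProbeHandler rather than Handler.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "k8s.gcr.io/test-webserver", // illustrative image
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(80),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					FailureThreshold:    3,
				},
			}},
		},
	}
	fmt.Println("liveness probe path:", pod.Spec.Containers[0].LivenessProbe.HTTPGet.Path)
}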
Feb 12 13:56:49.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:56:49.398: INFO: namespace container-probe-901 deletion completed in 6.186403543s • [SLOW TEST:257.893 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:56:49.399: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 13:57:01.553: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-f2d7e5a6-6ff7-4e05-86f1-9a1091528059 contains '' instead of 'foo.example.com.' Feb 12 13:57:01.560: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-f2d7e5a6-6ff7-4e05-86f1-9a1091528059 contains '' instead of 'foo.example.com.' Feb 12 13:57:01.560: INFO: Lookups using dns-5760/dns-test-f2d7e5a6-6ff7-4e05-86f1-9a1091528059 failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local] Feb 12 13:57:06.591: INFO: DNS probes using dns-test-f2d7e5a6-6ff7-4e05-86f1-9a1091528059 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 13:57:20.753: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains '' instead of 'bar.example.com.' 
Feb 12 13:57:20.772: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains '' instead of 'bar.example.com.' Feb 12 13:57:20.772: INFO: Lookups using dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local] Feb 12 13:57:25.785: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 12 13:57:25.791: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 12 13:57:25.791: INFO: Lookups using dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local] Feb 12 13:57:30.785: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 12 13:57:30.791: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 12 13:57:30.791: INFO: Lookups using dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local] Feb 12 13:57:35.791: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com. ' instead of 'bar.example.com.' Feb 12 13:57:35.815: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c contains 'foo.example.com. ' instead of 'bar.example.com.' 
Feb 12 13:57:35.815: INFO: Lookups using dns-5760/dns-test-8f74d900-4247-4129-8741-9aca1e18c22c failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local] Feb 12 13:57:40.834: INFO: DNS probes using dns-test-8f74d900-4247-4129-8741-9aca1e18c22c succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5760.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 13:57:57.187: INFO: File wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-dea15795-2d66-4818-8ea6-62388e2b5d8f contains '' instead of '10.96.91.32' Feb 12 13:57:57.193: INFO: File jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local from pod dns-5760/dns-test-dea15795-2d66-4818-8ea6-62388e2b5d8f contains '' instead of '10.96.91.32' Feb 12 13:57:57.193: INFO: Lookups using dns-5760/dns-test-dea15795-2d66-4818-8ea6-62388e2b5d8f failed for: [wheezy_udp@dns-test-service-3.dns-5760.svc.cluster.local jessie_udp@dns-test-service-3.dns-5760.svc.cluster.local] Feb 12 13:58:02.218: INFO: DNS probes using dns-test-dea15795-2d66-4818-8ea6-62388e2b5d8f succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:58:02.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5760" for this suite. 
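The DNS block above repoints a service from ExternalName foo.example.com to bar.example.com and finally converts it to ClusterIP, checking the published CNAME/A records after each change. A sketch of the initial ExternalName service object (name and namespace taken from the log, the rest illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "dns-test-service-3",
			Namespace: "dns-5760",
		},
		Spec: corev1.ServiceSpec{
			Type:         corev1.ServiceTypeExternalName,
			ExternalName: "foo.example.com",
			// Changing ExternalName to "bar.example.com" (or switching Type to
			// ClusterIP) is what the dig probes above observe.
		},
	}
	fmt.Println(svc.Name, "->", svc.Spec.ExternalName)
}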
Feb 12 13:58:10.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:58:10.790: INFO: namespace dns-5760 deletion completed in 8.17390265s • [SLOW TEST:81.391 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:58:10.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 13:58:10.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93" in namespace "downward-api-9591" to be "success or failure" Feb 12 13:58:10.968: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 91.489541ms Feb 12 13:58:12.980: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102983527s Feb 12 13:58:14.987: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110533113s Feb 12 13:58:16.997: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119806737s Feb 12 13:58:19.035: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158573316s Feb 12 13:58:21.047: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.169777233s STEP: Saw pod success Feb 12 13:58:21.047: INFO: Pod "downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93" satisfied condition "success or failure" Feb 12 13:58:21.051: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93 container client-container: STEP: delete the pod Feb 12 13:58:21.178: INFO: Waiting for pod downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93 to disappear Feb 12 13:58:21.184: INFO: Pod downwardapi-volume-9afd7717-6cb3-4b98-85f7-fee05b1efd93 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:58:21.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9591" for this suite. 
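The downward API volume test above exposes the container's own memory request as a file and asserts on its contents. The wiring looks roughly like the sketch below, assuming a client-container that requests 32Mi; file paths and names are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path: "memory_request",
							ResourceFieldRef: &corev1.ResourceFieldSelector{
								ContainerName: "client-container",
								Resource:      "requests.memory",
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceMemory: resource.MustParse("32Mi"),
					},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println("downward API file:", pod.Spec.Volumes[0].DownwardAPI.Items[0].Path)
}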
Feb 12 13:58:27.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:58:27.406: INFO: namespace downward-api-9591 deletion completed in 6.218019961s • [SLOW TEST:16.615 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:58:27.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-b700bb16-733d-45da-b886-e84292ec4e35 STEP: Creating a pod to test consume configMaps Feb 12 13:58:27.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a" in namespace "configmap-7104" to be "success or failure" Feb 12 13:58:27.627: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 17.198846ms Feb 12 13:58:29.635: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025044326s Feb 12 13:58:31.650: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039880852s Feb 12 13:58:33.660: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050015817s Feb 12 13:58:35.668: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05786489s Feb 12 13:58:37.677: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066893598s STEP: Saw pod success Feb 12 13:58:37.677: INFO: Pod "pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a" satisfied condition "success or failure" Feb 12 13:58:37.683: INFO: Trying to get logs from node iruya-node pod pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a container configmap-volume-test: STEP: delete the pod Feb 12 13:58:37.780: INFO: Waiting for pod pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a to disappear Feb 12 13:58:37.789: INFO: Pod pod-configmaps-9e8236ab-5ca1-40b7-9d28-5507e156555a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:58:37.790: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7104" for this suite. 
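The ConfigMap volume test above mounts an entire ConfigMap as a directory (unlike the subPath variant earlier) and has the container print one of the projected files. A minimal sketch, with illustrative ConfigMap name, key and mount path:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						// illustrative ConfigMap name
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}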
Feb 12 13:58:43.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:58:44.027: INFO: namespace configmap-7104 deletion completed in 6.221018317s • [SLOW TEST:16.621 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:58:44.028: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Feb 12 13:58:44.093: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix184574266/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:58:44.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3820" for this suite. 
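The --unix-socket variant above binds the proxy to a socket file instead of a TCP port. Talking to it from Go needs a Transport that dials the socket, as sketched below; the socket path is illustrative and the proxy is assumed to be running already (kubectl proxy --unix-socket=<path>).

package main

import (
	"context"
	"fmt"
	"io/ioutil"
	"net"
	"net/http"
)

func main() {
	sock := "/tmp/kubectl-proxy-unix/test" // illustrative socket path
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the URL's host:port and always dial the Unix socket.
			DialContext: func(ctx context.Context, network, addr string) (net.Conn, error) {
				return net.Dial("unix", sock)
			},
		},
	}
	resp, err := client.Get("http://localhost/api/") // host is a placeholder
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := ioutil.ReadAll(resp.Body)
	fmt.Printf("%d: %s\n", resp.StatusCode, body)
}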
Feb 12 13:58:50.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:58:50.306: INFO: namespace kubectl-3820 deletion completed in 6.144523474s • [SLOW TEST:6.278 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:58:50.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Feb 12 13:59:02.997: INFO: Successfully updated pod "annotationupdate3eaeacd5-37e5-4a75-a6f5-5b67cbc730fa" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 13:59:04.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3445" for this suite. 
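The projected downward API test above mounts the pod's own annotations through a projected volume and then waits for the kubelet to refresh the file after the annotations are updated. A sketch of the pod wiring, with illustrative names and an illustrative annotation value:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:        "annotationupdate-demo",
			Annotations: map[string]string{"builder": "first-value"}, // value later mutated by the test
		},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "annotations",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.annotations"},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}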
Feb 12 13:59:43.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 13:59:43.210: INFO: namespace projected-3445 deletion completed in 38.181953183s • [SLOW TEST:52.903 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 13:59:43.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods Feb 12 13:59:44.209: INFO: Pod name wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c: Found 0 pods out of 5 Feb 12 13:59:49.275: INFO: Pod name wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c in namespace emptydir-wrapper-9795, will wait for the garbage collector to delete the pods Feb 12 14:00:17.396: INFO: Deleting ReplicationController wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c took: 16.108205ms Feb 12 14:00:17.797: INFO: Terminating ReplicationController wrapped-volume-race-41fca42d-ed81-4efa-8664-7d3cbc57738c pods took: 400.481672ms STEP: Creating RC which spawns configmap-volume pods Feb 12 14:01:07.092: INFO: Pod name wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05: Found 0 pods out of 5 Feb 12 14:01:12.104: INFO: Pod name wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05 in namespace emptydir-wrapper-9795, will wait for the garbage collector to delete the pods Feb 12 14:01:44.218: INFO: Deleting ReplicationController wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05 took: 16.073488ms Feb 12 14:01:44.618: INFO: Terminating ReplicationController wrapped-volume-race-4e1ad303-8053-40f3-a398-badeb65c3e05 pods took: 400.524067ms STEP: Creating RC which spawns configmap-volume pods Feb 12 14:02:37.061: INFO: Pod name wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8: Found 0 pods out of 5 Feb 12 14:02:42.079: INFO: Pod name wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8 in namespace emptydir-wrapper-9795, will wait for the garbage collector to delete the pods Feb 12 14:03:12.193: INFO: Deleting 
ReplicationController wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8 took: 17.193738ms Feb 12 14:03:12.594: INFO: Terminating ReplicationController wrapped-volume-race-f6e5884a-ec7f-4f03-b62f-e1ab2848f6a8 pods took: 400.984523ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:03:58.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-9795" for this suite. Feb 12 14:04:08.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:04:08.560: INFO: namespace emptydir-wrapper-9795 deletion completed in 10.159393556s • [SLOW TEST:265.349 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:04:08.560: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-9dd14e01-4f27-4c3e-bc01-0a152d6962a8 STEP: Creating configMap with name cm-test-opt-upd-2f4fddee-827b-41ad-8046-2e66394bfe31 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-9dd14e01-4f27-4c3e-bc01-0a152d6962a8 STEP: Updating configmap cm-test-opt-upd-2f4fddee-827b-41ad-8046-2e66394bfe31 STEP: Creating configMap with name cm-test-opt-create-58a45cff-f346-4297-9bd0-b5384f3fc436 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:05:41.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4543" for this suite. 
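The optional-update test above wires ConfigMaps into a projected volume with optional: true, then deletes one, updates one and creates the missing one, watching the mounted files converge. The key detail is the Optional flag on the projection, sketched below with illustrative names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "createcm-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						// This ConfigMap may not exist yet when the pod starts.
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
						Optional:             &optional, // pod still runs; files appear once the ConfigMap is created
					},
				}},
			},
		},
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-demo"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{vol},
			Containers: []corev1.Container{{
				Name:         "createcm-volume-test",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "while true; do ls /etc/projected; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{Name: vol.Name, MountPath: "/etc/projected"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}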
Feb 12 14:06:03.058: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:06:03.150: INFO: namespace projected-4543 deletion completed in 22.117921094s • [SLOW TEST:114.590 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:06:03.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6326 STEP: creating a selector STEP: Creating the service pods in kubernetes Feb 12 14:06:03.194: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Feb 12 14:06:43.963: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-6326 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 14:06:43.963: INFO: >>> kubeConfig: /root/.kube/config I0212 14:06:44.060288 8 log.go:172] (0xc000ba2a50) (0xc00221dea0) Create stream I0212 14:06:44.060412 8 log.go:172] (0xc000ba2a50) (0xc00221dea0) Stream added, broadcasting: 1 I0212 14:06:44.069784 8 log.go:172] (0xc000ba2a50) Reply frame received for 1 I0212 14:06:44.069841 8 log.go:172] (0xc000ba2a50) (0xc001c75ea0) Create stream I0212 14:06:44.069853 8 log.go:172] (0xc000ba2a50) (0xc001c75ea0) Stream added, broadcasting: 3 I0212 14:06:44.071638 8 log.go:172] (0xc000ba2a50) Reply frame received for 3 I0212 14:06:44.071665 8 log.go:172] (0xc000ba2a50) (0xc00038d7c0) Create stream I0212 14:06:44.071675 8 log.go:172] (0xc000ba2a50) (0xc00038d7c0) Stream added, broadcasting: 5 I0212 14:06:44.073949 8 log.go:172] (0xc000ba2a50) Reply frame received for 5 I0212 14:06:44.267168 8 log.go:172] (0xc000ba2a50) Data frame received for 3 I0212 14:06:44.267215 8 log.go:172] (0xc001c75ea0) (3) Data frame handling I0212 14:06:44.267232 8 log.go:172] (0xc001c75ea0) (3) Data frame sent I0212 14:06:44.412798 8 log.go:172] (0xc000ba2a50) (0xc001c75ea0) Stream removed, broadcasting: 3 I0212 14:06:44.412956 8 log.go:172] (0xc000ba2a50) Data frame received for 1 I0212 14:06:44.413052 8 log.go:172] (0xc00221dea0) (1) Data frame handling I0212 14:06:44.413073 8 log.go:172] (0xc00221dea0) (1) Data frame sent I0212 14:06:44.413332 8 log.go:172] (0xc000ba2a50) (0xc00221dea0) Stream removed, broadcasting: 1 I0212 14:06:44.413372 8 log.go:172] (0xc000ba2a50) 
(0xc00038d7c0) Stream removed, broadcasting: 5 I0212 14:06:44.413402 8 log.go:172] (0xc000ba2a50) Go away received I0212 14:06:44.413521 8 log.go:172] (0xc000ba2a50) (0xc00221dea0) Stream removed, broadcasting: 1 I0212 14:06:44.413537 8 log.go:172] (0xc000ba2a50) (0xc001c75ea0) Stream removed, broadcasting: 3 I0212 14:06:44.413547 8 log.go:172] (0xc000ba2a50) (0xc00038d7c0) Stream removed, broadcasting: 5 Feb 12 14:06:44.413: INFO: Waiting for endpoints: map[] Feb 12 14:06:44.420: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6326 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Feb 12 14:06:44.420: INFO: >>> kubeConfig: /root/.kube/config I0212 14:06:44.501801 8 log.go:172] (0xc001b92160) (0xc00038dc20) Create stream I0212 14:06:44.502031 8 log.go:172] (0xc001b92160) (0xc00038dc20) Stream added, broadcasting: 1 I0212 14:06:44.513956 8 log.go:172] (0xc001b92160) Reply frame received for 1 I0212 14:06:44.514019 8 log.go:172] (0xc001b92160) (0xc002922280) Create stream I0212 14:06:44.514028 8 log.go:172] (0xc001b92160) (0xc002922280) Stream added, broadcasting: 3 I0212 14:06:44.515797 8 log.go:172] (0xc001b92160) Reply frame received for 3 I0212 14:06:44.515830 8 log.go:172] (0xc001b92160) (0xc0027580a0) Create stream I0212 14:06:44.515837 8 log.go:172] (0xc001b92160) (0xc0027580a0) Stream added, broadcasting: 5 I0212 14:06:44.521168 8 log.go:172] (0xc001b92160) Reply frame received for 5 I0212 14:06:44.684897 8 log.go:172] (0xc001b92160) Data frame received for 3 I0212 14:06:44.684969 8 log.go:172] (0xc002922280) (3) Data frame handling I0212 14:06:44.684989 8 log.go:172] (0xc002922280) (3) Data frame sent I0212 14:06:44.856683 8 log.go:172] (0xc001b92160) (0xc002922280) Stream removed, broadcasting: 3 I0212 14:06:44.856899 8 log.go:172] (0xc001b92160) Data frame received for 1 I0212 14:06:44.856963 8 log.go:172] (0xc00038dc20) (1) Data frame handling I0212 14:06:44.856980 8 log.go:172] (0xc00038dc20) (1) Data frame sent I0212 14:06:44.856986 8 log.go:172] (0xc001b92160) (0xc00038dc20) Stream removed, broadcasting: 1 I0212 14:06:44.857787 8 log.go:172] (0xc001b92160) (0xc0027580a0) Stream removed, broadcasting: 5 I0212 14:06:44.857871 8 log.go:172] (0xc001b92160) Go away received I0212 14:06:44.858050 8 log.go:172] (0xc001b92160) (0xc00038dc20) Stream removed, broadcasting: 1 I0212 14:06:44.858145 8 log.go:172] (0xc001b92160) (0xc002922280) Stream removed, broadcasting: 3 I0212 14:06:44.858156 8 log.go:172] (0xc001b92160) (0xc0027580a0) Stream removed, broadcasting: 5 Feb 12 14:06:44.858: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:06:44.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6326" for this suite. 
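The connectivity check above execs curl inside a host test pod against each endpoint pod's /dial handler and expects the dialed pod's hostname back. A rough Go equivalent of one such request is sketched below; the addresses are copied from the log, and the JSON field name "responses" is an assumption about the test image's reply schema.

package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	// Ask the webserver at 10.44.0.2 to dial its peer at 10.44.0.1 over HTTP
	// and report which hostname answered.
	url := "http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1"
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var result struct {
		Responses []string `json:"responses"` // assumed field name
	}
	if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
		panic(err)
	}
	fmt.Println("peers that answered:", result.Responses)
}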
Feb 12 14:07:08.922: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:07:09.063: INFO: namespace pod-network-test-6326 deletion completed in 24.182572057s • [SLOW TEST:65.913 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:07:09.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 12 14:07:09.232: INFO: Waiting up to 5m0s for pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690" in namespace "emptydir-4800" to be "success or failure" Feb 12 14:07:09.245: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 12.557212ms Feb 12 14:07:11.250: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017844703s Feb 12 14:07:13.260: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027823709s Feb 12 14:07:15.268: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035781002s Feb 12 14:07:17.277: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 8.044863646s Feb 12 14:07:19.289: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Pending", Reason="", readiness=false. Elapsed: 10.057399308s Feb 12 14:07:21.295: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.062556171s STEP: Saw pod success Feb 12 14:07:21.295: INFO: Pod "pod-e397c2a1-da5d-4dba-b224-f0240cb59690" satisfied condition "success or failure" Feb 12 14:07:21.299: INFO: Trying to get logs from node iruya-node pod pod-e397c2a1-da5d-4dba-b224-f0240cb59690 container test-container: STEP: delete the pod Feb 12 14:07:21.347: INFO: Waiting for pod pod-e397c2a1-da5d-4dba-b224-f0240cb59690 to disappear Feb 12 14:07:21.351: INFO: Pod pod-e397c2a1-da5d-4dba-b224-f0240cb59690 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:07:21.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4800" for this suite. 
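The emptyDir case above mounts a tmpfs-backed emptyDir (medium: Memory), writes to it as root with mode 0777 and verifies the observed permissions. A sketch of such a pod, with an illustrative shell command standing in for the mounttest binary the suite actually uses:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs-backed emptyDir
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Set 0777 on the mount and report the observed mode.
				Command:      []string{"sh", "-c", "chmod 0777 /test-volume && stat -c '%a' /test-volume"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Println(pod.Name)
}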
Feb 12 14:07:27.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:07:27.553: INFO: namespace emptydir-4800 deletion completed in 6.167535985s • [SLOW TEST:18.488 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:07:27.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Feb 12 14:07:27.647: INFO: Creating ReplicaSet my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f Feb 12 14:07:27.710: INFO: Pod name my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f: Found 0 pods out of 1 Feb 12 14:07:32.768: INFO: Pod name my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f: Found 1 pods out of 1 Feb 12 14:07:32.768: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f" is running Feb 12 14:07:38.786: INFO: Pod "my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f-dhtd9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:07:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:07:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:07:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-12 14:07:27 +0000 UTC Reason: Message:}]) Feb 12 14:07:38.786: INFO: Trying to dial the pod Feb 12 14:07:43.833: INFO: Controller my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f: Got expected result from replica 1 [my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f-dhtd9]: "my-hostname-basic-ad932733-815d-43cd-8f2b-26d69809d96f-dhtd9", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:07:43.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5305" for this suite. 
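An equivalent ReplicaSet can be written out as below. The image is an assumption: any container that answers HTTP with its own pod name will do (the conformance suite ships its own serve-hostname test image for this), and 9376 is the port that image conventionally listens on.

cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-hostname-basic
  template:
    metadata:
      labels:
        app: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: k8s.gcr.io/serve_hostname     # assumed image; replies with its pod name
        ports:
        - containerPort: 9376
EOF
# The test then dials each replica and expects the reply to equal the pod name:
kubectl get pods -l app=my-hostname-basic -o wide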
Feb 12 14:07:49.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:07:49.978: INFO: namespace replicaset-5305 deletion completed in 6.137226436s • [SLOW TEST:22.425 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:07:49.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1bd1303b-9e30-496f-b6fe-7b98df7bd9cd STEP: Creating a pod to test consume configMaps Feb 12 14:07:50.108: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad" in namespace "projected-1257" to be "success or failure" Feb 12 14:07:50.136: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 28.13871ms Feb 12 14:07:52.152: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.043640606s Feb 12 14:07:54.166: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058185551s Feb 12 14:07:56.179: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070887199s Feb 12 14:07:58.186: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 8.07823751s Feb 12 14:08:00.196: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 10.087424797s Feb 12 14:08:02.207: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Pending", Reason="", readiness=false. Elapsed: 12.098895869s Feb 12 14:08:04.218: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 14.109503084s STEP: Saw pod success Feb 12 14:08:04.218: INFO: Pod "pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad" satisfied condition "success or failure" Feb 12 14:08:04.222: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad container projected-configmap-volume-test: STEP: delete the pod Feb 12 14:08:04.280: INFO: Waiting for pod pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad to disappear Feb 12 14:08:04.388: INFO: Pod pod-projected-configmaps-01147d64-63d4-44e7-bb26-73421d86c4ad no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:08:04.388: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1257" for this suite. Feb 12 14:08:10.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:08:10.576: INFO: namespace projected-1257 deletion completed in 6.181262471s • [SLOW TEST:20.597 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:08:10.578: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3063.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3063.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3063.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3063.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3063.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-3063.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Feb 12 14:08:24.789: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea) Feb 12 14:08:24.794: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea) Feb 12 14:08:24.801: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-3063.svc.cluster.local from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea) Feb 12 14:08:24.807: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea) Feb 12 14:08:24.811: INFO: Unable to read jessie_udp@PodARecord from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea) Feb 12 14:08:24.814: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea: the server could not find the requested resource (get pods dns-test-abfa7d64-7c86-4214-8610-98ba292092ea) Feb 12 14:08:24.814: INFO: Lookups using dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-3063.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord] Feb 12 14:08:29.904: INFO: DNS probes using dns-3063/dns-test-abfa7d64-7c86-4214-8610-98ba292092ea succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:08:30.003: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3063" for this suite. 
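The probe loop above (getent for the /etc/hosts entries, dig for the pod A record over UDP and then TCP) can be run once by hand against the same probe pod. A sketch, assuming the jessie probe container is named jessie-querier as in the upstream test and has getent and dig available:

kubectl exec -n dns-3063 dns-test-abfa7d64-7c86-4214-8610-98ba292092ea -c jessie-querier -- getent hosts dns-querier-1.dns-test-service.dns-3063.svc.cluster.local
kubectl exec -n dns-3063 dns-test-abfa7d64-7c86-4214-8610-98ba292092ea -c jessie-querier -- getent hosts dns-querier-1
# Pod A record: <pod-ip-with-dashes>.<namespace>.pod.cluster.local, queried over UDP and then TCP
kubectl exec -n dns-3063 dns-test-abfa7d64-7c86-4214-8610-98ba292092ea -c jessie-querier -- sh -c 'rec="$(hostname -i | tr . -).dns-3063.pod.cluster.local"; dig +notcp +noall +answer +search "$rec" A; dig +tcp +noall +answer +search "$rec" A'

The earlier "Unable to read ... the server could not find the requested resource" lines appear to be normal warm-up: the probers only write their OK result files once the lookups succeed, and the framework keeps re-reading until every expected name reports in, which is the "DNS probes ... succeeded" line above.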
Feb 12 14:08:36.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:08:36.231: INFO: namespace dns-3063 deletion completed in 6.186708519s • [SLOW TEST:25.653 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:08:36.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Feb 12 14:08:45.076: INFO: Successfully updated pod "pod-update-236f9a43-94df-4b70-8131-cf45e73ab8c7" STEP: verifying the updated pod is in kubernetes Feb 12 14:08:45.089: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:08:45.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2115" for this suite. 
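The same style of in-place update can be made with kubectl. The log does not show which field the test mutates, so the label below is only an illustrative choice (labels are one of the few things that may be changed freely on a running pod):

kubectl label pod pod-update-236f9a43-94df-4b70-8131-cf45e73ab8c7 -n pods-2115 updated=true --overwrite
kubectl get pod pod-update-236f9a43-94df-4b70-8131-cf45e73ab8c7 -n pods-2115 --show-labels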
Feb 12 14:09:07.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:09:07.224: INFO: namespace pods-2115 deletion completed in 22.128779672s • [SLOW TEST:30.993 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:09:07.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Feb 12 14:09:07.357: INFO: Waiting up to 5m0s for pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba" in namespace "emptydir-1479" to be "success or failure" Feb 12 14:09:07.419: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 62.117252ms Feb 12 14:09:09.426: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068904219s Feb 12 14:09:11.432: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074945814s Feb 12 14:09:13.442: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.084761752s Feb 12 14:09:15.448: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.090386294s Feb 12 14:09:17.456: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098305279s STEP: Saw pod success Feb 12 14:09:17.456: INFO: Pod "pod-106587a8-9a85-485f-a7e3-3656e7bf60ba" satisfied condition "success or failure" Feb 12 14:09:17.459: INFO: Trying to get logs from node iruya-node pod pod-106587a8-9a85-485f-a7e3-3656e7bf60ba container test-container: STEP: delete the pod Feb 12 14:09:17.512: INFO: Waiting for pod pod-106587a8-9a85-485f-a7e3-3656e7bf60ba to disappear Feb 12 14:09:17.521: INFO: Pod pod-106587a8-9a85-485f-a7e3-3656e7bf60ba no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:09:17.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1479" for this suite. 
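Relative to the (root,0777,tmpfs) case earlier, the non-root variant changes only who the container runs as. In pod-spec terms that is a securityContext; a compact sketch, where UID 1001 is an arbitrary illustrative value rather than the one the test image uses:

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-nonroot-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001           # run the pod as a non-root UID (illustrative value)
    runAsNonRoot: true
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "id -u; touch /mnt/scratch/ok"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/scratch
EOF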
Feb 12 14:09:23.571: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:09:23.684: INFO: namespace emptydir-1479 deletion completed in 6.15457404s • [SLOW TEST:16.460 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:09:23.686: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Feb 12 14:09:23.802: INFO: Waiting up to 5m0s for pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda" in namespace "downward-api-4425" to be "success or failure" Feb 12 14:09:23.852: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 49.78984ms Feb 12 14:09:25.863: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061041357s Feb 12 14:09:27.873: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071177887s Feb 12 14:09:29.880: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078162959s Feb 12 14:09:31.891: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088734732s Feb 12 14:09:33.905: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.103294383s STEP: Saw pod success Feb 12 14:09:33.905: INFO: Pod "downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda" satisfied condition "success or failure" Feb 12 14:09:33.912: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda container client-container: STEP: delete the pod Feb 12 14:09:34.127: INFO: Waiting for pod downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda to disappear Feb 12 14:09:34.133: INFO: Pod downwardapi-volume-66a4c0d2-a770-440e-b65d-ee6b8b068cda no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:09:34.133: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4425" for this suite. Feb 12 14:09:40.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:09:40.285: INFO: namespace downward-api-4425 deletion completed in 6.146289617s • [SLOW TEST:16.599 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:09:40.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Feb 12 14:10:04.521: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 12 14:10:04.535: INFO: Pod pod-with-poststart-http-hook still exists Feb 12 14:10:06.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 12 14:10:06.550: INFO: Pod pod-with-poststart-http-hook still exists Feb 12 14:10:08.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 12 14:10:08.548: INFO: Pod pod-with-poststart-http-hook still exists Feb 12 14:10:10.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 12 14:10:10.565: INFO: Pod pod-with-poststart-http-hook still exists Feb 12 14:10:12.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 12 14:10:12.560: INFO: Pod pod-with-poststart-http-hook still exists Feb 12 14:10:14.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 12 14:10:14.551: INFO: Pod pod-with-poststart-http-hook still exists Feb 12 14:10:16.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 12 14:10:16.564: INFO: Pod pod-with-poststart-http-hook still exists Feb 12 14:10:18.536: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Feb 12 14:10:19.030: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Feb 12 14:10:19.030: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-661" for this suite. 
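For reference, this is what a postStart HTTP hook looks like in a pod spec. The handler's address is not shown in the log, so the host and port below are placeholders; the real test points the hook at the "handle the HTTPGet hook request" pod it created in BeforeEach and then confirms that pod received the request, which is the "check poststart hook" step above.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook-demo
spec:
  containers:
  - name: main
    image: nginx
    lifecycle:
      postStart:
        httpGet:
          host: 10.32.0.99     # placeholder: IP of the pod that handles the hook request
          port: 8080           # placeholder: port that handler listens on
          path: /
EOF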
Feb 12 14:10:41.112: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Feb 12 14:10:41.223: INFO: namespace container-lifecycle-hook-661 deletion completed in 22.169929042s • [SLOW TEST:60.937 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Feb 12 14:10:41.223: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-961 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-961 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-961 Feb 12 14:10:41.369: INFO: Found 0 stateful pods, waiting for 1 Feb 12 14:10:51.385: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Feb 12 14:11:03.097: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 12 14:11:03.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 14:11:05.708: INFO: stderr: "I0212 14:11:05.300493 2096 log.go:172] (0xc00013a6e0) (0xc00002e6e0) Create stream\nI0212 14:11:05.300687 2096 log.go:172] (0xc00013a6e0) (0xc00002e6e0) Stream added, broadcasting: 1\nI0212 14:11:05.312261 2096 log.go:172] (0xc00013a6e0) Reply frame received for 1\nI0212 14:11:05.312468 2096 log.go:172] (0xc00013a6e0) (0xc00071c000) Create stream\nI0212 14:11:05.312519 2096 log.go:172] (0xc00013a6e0) (0xc00071c000) Stream added, broadcasting: 3\nI0212 14:11:05.315493 2096 log.go:172] (0xc00013a6e0) Reply frame received for 3\nI0212 14:11:05.315551 2096 log.go:172] (0xc00013a6e0) (0xc000612280) Create stream\nI0212 14:11:05.315571 2096 log.go:172] (0xc00013a6e0) (0xc000612280) Stream added, broadcasting: 5\nI0212 14:11:05.317707 2096 log.go:172] (0xc00013a6e0) Reply frame received for 5\nI0212 14:11:05.521360 2096 log.go:172] (0xc00013a6e0) Data 
frame received for 5\nI0212 14:11:05.521845 2096 log.go:172] (0xc000612280) (5) Data frame handling\nI0212 14:11:05.521910 2096 log.go:172] (0xc000612280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:11:05.568728 2096 log.go:172] (0xc00013a6e0) Data frame received for 3\nI0212 14:11:05.568889 2096 log.go:172] (0xc00071c000) (3) Data frame handling\nI0212 14:11:05.568917 2096 log.go:172] (0xc00071c000) (3) Data frame sent\nI0212 14:11:05.694138 2096 log.go:172] (0xc00013a6e0) Data frame received for 1\nI0212 14:11:05.694293 2096 log.go:172] (0xc00013a6e0) (0xc00071c000) Stream removed, broadcasting: 3\nI0212 14:11:05.694383 2096 log.go:172] (0xc00002e6e0) (1) Data frame handling\nI0212 14:11:05.694425 2096 log.go:172] (0xc00002e6e0) (1) Data frame sent\nI0212 14:11:05.694803 2096 log.go:172] (0xc00013a6e0) (0xc000612280) Stream removed, broadcasting: 5\nI0212 14:11:05.694922 2096 log.go:172] (0xc00013a6e0) (0xc00002e6e0) Stream removed, broadcasting: 1\nI0212 14:11:05.694970 2096 log.go:172] (0xc00013a6e0) Go away received\nI0212 14:11:05.696655 2096 log.go:172] (0xc00013a6e0) (0xc00002e6e0) Stream removed, broadcasting: 1\nI0212 14:11:05.696679 2096 log.go:172] (0xc00013a6e0) (0xc00071c000) Stream removed, broadcasting: 3\nI0212 14:11:05.696696 2096 log.go:172] (0xc00013a6e0) (0xc000612280) Stream removed, broadcasting: 5\n" Feb 12 14:11:05.708: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 14:11:05.708: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 14:11:05.717: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 12 14:11:15.726: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 12 14:11:15.726: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 14:11:15.756: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:15.756: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:06 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:15.756: INFO: Feb 12 14:11:15.756: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 12 14:11:17.319: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.984265376s Feb 12 14:11:18.339: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.421111993s Feb 12 14:11:19.448: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.401023816s Feb 12 14:11:20.458: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.29166103s Feb 12 14:11:23.389: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.28183694s Feb 12 14:11:24.995: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.351032021s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-961 Feb 12 14:11:26.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:11:26.815: INFO: stderr: "I0212 14:11:26.231632 2122 
log.go:172] (0xc0007920b0) (0xc000696640) Create stream\nI0212 14:11:26.232059 2122 log.go:172] (0xc0007920b0) (0xc000696640) Stream added, broadcasting: 1\nI0212 14:11:26.242474 2122 log.go:172] (0xc0007920b0) Reply frame received for 1\nI0212 14:11:26.242607 2122 log.go:172] (0xc0007920b0) (0xc0006001e0) Create stream\nI0212 14:11:26.242636 2122 log.go:172] (0xc0007920b0) (0xc0006001e0) Stream added, broadcasting: 3\nI0212 14:11:26.244867 2122 log.go:172] (0xc0007920b0) Reply frame received for 3\nI0212 14:11:26.244905 2122 log.go:172] (0xc0007920b0) (0xc00071c000) Create stream\nI0212 14:11:26.244917 2122 log.go:172] (0xc0007920b0) (0xc00071c000) Stream added, broadcasting: 5\nI0212 14:11:26.247102 2122 log.go:172] (0xc0007920b0) Reply frame received for 5\nI0212 14:11:26.609842 2122 log.go:172] (0xc0007920b0) Data frame received for 5\nI0212 14:11:26.610217 2122 log.go:172] (0xc00071c000) (5) Data frame handling\nI0212 14:11:26.610271 2122 log.go:172] (0xc00071c000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 14:11:26.610322 2122 log.go:172] (0xc0007920b0) Data frame received for 3\nI0212 14:11:26.610334 2122 log.go:172] (0xc0006001e0) (3) Data frame handling\nI0212 14:11:26.610353 2122 log.go:172] (0xc0006001e0) (3) Data frame sent\nI0212 14:11:26.806058 2122 log.go:172] (0xc0007920b0) (0xc0006001e0) Stream removed, broadcasting: 3\nI0212 14:11:26.806218 2122 log.go:172] (0xc0007920b0) Data frame received for 1\nI0212 14:11:26.806233 2122 log.go:172] (0xc0007920b0) (0xc00071c000) Stream removed, broadcasting: 5\nI0212 14:11:26.806360 2122 log.go:172] (0xc000696640) (1) Data frame handling\nI0212 14:11:26.806396 2122 log.go:172] (0xc000696640) (1) Data frame sent\nI0212 14:11:26.806409 2122 log.go:172] (0xc0007920b0) (0xc000696640) Stream removed, broadcasting: 1\nI0212 14:11:26.806432 2122 log.go:172] (0xc0007920b0) Go away received\nI0212 14:11:26.807401 2122 log.go:172] (0xc0007920b0) (0xc000696640) Stream removed, broadcasting: 1\nI0212 14:11:26.807424 2122 log.go:172] (0xc0007920b0) (0xc0006001e0) Stream removed, broadcasting: 3\nI0212 14:11:26.807433 2122 log.go:172] (0xc0007920b0) (0xc00071c000) Stream removed, broadcasting: 5\n" Feb 12 14:11:26.815: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 14:11:26.815: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 14:11:26.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:11:27.111: INFO: stderr: "I0212 14:11:26.959479 2137 log.go:172] (0xc0007e8420) (0xc0002f8820) Create stream\nI0212 14:11:26.959672 2137 log.go:172] (0xc0007e8420) (0xc0002f8820) Stream added, broadcasting: 1\nI0212 14:11:26.965601 2137 log.go:172] (0xc0007e8420) Reply frame received for 1\nI0212 14:11:26.965659 2137 log.go:172] (0xc0007e8420) (0xc0007e2000) Create stream\nI0212 14:11:26.965666 2137 log.go:172] (0xc0007e8420) (0xc0007e2000) Stream added, broadcasting: 3\nI0212 14:11:26.966612 2137 log.go:172] (0xc0007e8420) Reply frame received for 3\nI0212 14:11:26.966638 2137 log.go:172] (0xc0007e8420) (0xc0007be000) Create stream\nI0212 14:11:26.966653 2137 log.go:172] (0xc0007e8420) (0xc0007be000) Stream added, broadcasting: 5\nI0212 14:11:26.967503 2137 log.go:172] (0xc0007e8420) Reply frame received for 5\nI0212 14:11:27.027386 2137 log.go:172] (0xc0007e8420) Data 
frame received for 5\nI0212 14:11:27.027517 2137 log.go:172] (0xc0007be000) (5) Data frame handling\nI0212 14:11:27.027558 2137 log.go:172] (0xc0007be000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0212 14:11:27.027877 2137 log.go:172] (0xc0007e8420) Data frame received for 5\nI0212 14:11:27.027937 2137 log.go:172] (0xc0007be000) (5) Data frame handling\nI0212 14:11:27.027964 2137 log.go:172] (0xc0007be000) (5) Data frame sent\nI0212 14:11:27.027996 2137 log.go:172] (0xc0007e8420) Data frame received for 5\nI0212 14:11:27.028027 2137 log.go:172] (0xc0007be000) (5) Data frame handling\nI0212 14:11:27.028050 2137 log.go:172] (0xc0007e8420) Data frame received for 3\nI0212 14:11:27.028068 2137 log.go:172] (0xc0007e2000) (3) Data frame handling\nI0212 14:11:27.028086 2137 log.go:172] (0xc0007e2000) (3) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0212 14:11:27.028238 2137 log.go:172] (0xc0007be000) (5) Data frame sent\nI0212 14:11:27.102337 2137 log.go:172] (0xc0007e8420) Data frame received for 1\nI0212 14:11:27.102476 2137 log.go:172] (0xc0007e8420) (0xc0007e2000) Stream removed, broadcasting: 3\nI0212 14:11:27.102612 2137 log.go:172] (0xc0002f8820) (1) Data frame handling\nI0212 14:11:27.102632 2137 log.go:172] (0xc0002f8820) (1) Data frame sent\nI0212 14:11:27.102691 2137 log.go:172] (0xc0007e8420) (0xc0007be000) Stream removed, broadcasting: 5\nI0212 14:11:27.102711 2137 log.go:172] (0xc0007e8420) (0xc0002f8820) Stream removed, broadcasting: 1\nI0212 14:11:27.103098 2137 log.go:172] (0xc0007e8420) (0xc0002f8820) Stream removed, broadcasting: 1\nI0212 14:11:27.103161 2137 log.go:172] (0xc0007e8420) (0xc0007e2000) Stream removed, broadcasting: 3\nI0212 14:11:27.103174 2137 log.go:172] (0xc0007e8420) (0xc0007be000) Stream removed, broadcasting: 5\n" Feb 12 14:11:27.112: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 14:11:27.112: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 14:11:27.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:11:27.737: INFO: stderr: "I0212 14:11:27.469816 2156 log.go:172] (0xc000a46370) (0xc000a0e640) Create stream\nI0212 14:11:27.470166 2156 log.go:172] (0xc000a46370) (0xc000a0e640) Stream added, broadcasting: 1\nI0212 14:11:27.481307 2156 log.go:172] (0xc000a46370) Reply frame received for 1\nI0212 14:11:27.481456 2156 log.go:172] (0xc000a46370) (0xc000924000) Create stream\nI0212 14:11:27.481476 2156 log.go:172] (0xc000a46370) (0xc000924000) Stream added, broadcasting: 3\nI0212 14:11:27.483802 2156 log.go:172] (0xc000a46370) Reply frame received for 3\nI0212 14:11:27.483859 2156 log.go:172] (0xc000a46370) (0xc0005c4320) Create stream\nI0212 14:11:27.483882 2156 log.go:172] (0xc000a46370) (0xc0005c4320) Stream added, broadcasting: 5\nI0212 14:11:27.485025 2156 log.go:172] (0xc000a46370) Reply frame received for 5\nI0212 14:11:27.606456 2156 log.go:172] (0xc000a46370) Data frame received for 5\nI0212 14:11:27.606657 2156 log.go:172] (0xc0005c4320) (5) Data frame handling\nI0212 14:11:27.606687 2156 log.go:172] (0xc0005c4320) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0212 14:11:27.606762 2156 log.go:172] (0xc000a46370) Data frame 
received for 3\nI0212 14:11:27.606803 2156 log.go:172] (0xc000924000) (3) Data frame handling\nI0212 14:11:27.606821 2156 log.go:172] (0xc000924000) (3) Data frame sent\nI0212 14:11:27.724565 2156 log.go:172] (0xc000a46370) (0xc000924000) Stream removed, broadcasting: 3\nI0212 14:11:27.724959 2156 log.go:172] (0xc000a46370) Data frame received for 1\nI0212 14:11:27.725061 2156 log.go:172] (0xc000a46370) (0xc0005c4320) Stream removed, broadcasting: 5\nI0212 14:11:27.725110 2156 log.go:172] (0xc000a0e640) (1) Data frame handling\nI0212 14:11:27.725125 2156 log.go:172] (0xc000a0e640) (1) Data frame sent\nI0212 14:11:27.725133 2156 log.go:172] (0xc000a46370) (0xc000a0e640) Stream removed, broadcasting: 1\nI0212 14:11:27.725152 2156 log.go:172] (0xc000a46370) Go away received\nI0212 14:11:27.726438 2156 log.go:172] (0xc000a46370) (0xc000a0e640) Stream removed, broadcasting: 1\nI0212 14:11:27.726461 2156 log.go:172] (0xc000a46370) (0xc000924000) Stream removed, broadcasting: 3\nI0212 14:11:27.726469 2156 log.go:172] (0xc000a46370) (0xc0005c4320) Stream removed, broadcasting: 5\n" Feb 12 14:11:27.737: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Feb 12 14:11:27.737: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Feb 12 14:11:27.744: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 12 14:11:27.744: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 12 14:11:27.744: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 12 14:11:27.747: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 14:11:28.127: INFO: stderr: "I0212 14:11:27.879916 2178 log.go:172] (0xc0006bca50) (0xc0003ca6e0) Create stream\nI0212 14:11:27.880371 2178 log.go:172] (0xc0006bca50) (0xc0003ca6e0) Stream added, broadcasting: 1\nI0212 14:11:27.892678 2178 log.go:172] (0xc0006bca50) Reply frame received for 1\nI0212 14:11:27.892753 2178 log.go:172] (0xc0006bca50) (0xc000688500) Create stream\nI0212 14:11:27.892766 2178 log.go:172] (0xc0006bca50) (0xc000688500) Stream added, broadcasting: 3\nI0212 14:11:27.895696 2178 log.go:172] (0xc0006bca50) Reply frame received for 3\nI0212 14:11:27.895721 2178 log.go:172] (0xc0006bca50) (0xc0003ca780) Create stream\nI0212 14:11:27.895727 2178 log.go:172] (0xc0006bca50) (0xc0003ca780) Stream added, broadcasting: 5\nI0212 14:11:27.897144 2178 log.go:172] (0xc0006bca50) Reply frame received for 5\nI0212 14:11:27.998581 2178 log.go:172] (0xc0006bca50) Data frame received for 3\nI0212 14:11:27.998847 2178 log.go:172] (0xc000688500) (3) Data frame handling\nI0212 14:11:27.998871 2178 log.go:172] (0xc000688500) (3) Data frame sent\nI0212 14:11:27.998935 2178 log.go:172] (0xc0006bca50) Data frame received for 5\nI0212 14:11:27.998944 2178 log.go:172] (0xc0003ca780) (5) Data frame handling\nI0212 14:11:27.998957 2178 log.go:172] (0xc0003ca780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:11:28.115343 2178 log.go:172] (0xc0006bca50) Data frame received for 1\nI0212 14:11:28.115543 2178 log.go:172] (0xc0006bca50) (0xc0003ca780) Stream removed, broadcasting: 5\nI0212 14:11:28.115640 2178 log.go:172] (0xc0003ca6e0) (1) Data frame handling\nI0212 
14:11:28.115670 2178 log.go:172] (0xc0003ca6e0) (1) Data frame sent\nI0212 14:11:28.115728 2178 log.go:172] (0xc0006bca50) (0xc000688500) Stream removed, broadcasting: 3\nI0212 14:11:28.115766 2178 log.go:172] (0xc0006bca50) (0xc0003ca6e0) Stream removed, broadcasting: 1\nI0212 14:11:28.115813 2178 log.go:172] (0xc0006bca50) Go away received\nI0212 14:11:28.117769 2178 log.go:172] (0xc0006bca50) (0xc0003ca6e0) Stream removed, broadcasting: 1\nI0212 14:11:28.117807 2178 log.go:172] (0xc0006bca50) (0xc000688500) Stream removed, broadcasting: 3\nI0212 14:11:28.117819 2178 log.go:172] (0xc0006bca50) (0xc0003ca780) Stream removed, broadcasting: 5\n" Feb 12 14:11:28.128: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 14:11:28.128: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 14:11:28.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 14:11:28.646: INFO: stderr: "I0212 14:11:28.306286 2197 log.go:172] (0xc0005b0580) (0xc000654a00) Create stream\nI0212 14:11:28.306800 2197 log.go:172] (0xc0005b0580) (0xc000654a00) Stream added, broadcasting: 1\nI0212 14:11:28.313764 2197 log.go:172] (0xc0005b0580) Reply frame received for 1\nI0212 14:11:28.313867 2197 log.go:172] (0xc0005b0580) (0xc000654aa0) Create stream\nI0212 14:11:28.313881 2197 log.go:172] (0xc0005b0580) (0xc000654aa0) Stream added, broadcasting: 3\nI0212 14:11:28.316120 2197 log.go:172] (0xc0005b0580) Reply frame received for 3\nI0212 14:11:28.316157 2197 log.go:172] (0xc0005b0580) (0xc0006a2000) Create stream\nI0212 14:11:28.316171 2197 log.go:172] (0xc0005b0580) (0xc0006a2000) Stream added, broadcasting: 5\nI0212 14:11:28.317344 2197 log.go:172] (0xc0005b0580) Reply frame received for 5\nI0212 14:11:28.406991 2197 log.go:172] (0xc0005b0580) Data frame received for 5\nI0212 14:11:28.407130 2197 log.go:172] (0xc0006a2000) (5) Data frame handling\nI0212 14:11:28.407172 2197 log.go:172] (0xc0006a2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:11:28.434172 2197 log.go:172] (0xc0005b0580) Data frame received for 3\nI0212 14:11:28.434286 2197 log.go:172] (0xc000654aa0) (3) Data frame handling\nI0212 14:11:28.434312 2197 log.go:172] (0xc000654aa0) (3) Data frame sent\nI0212 14:11:28.615167 2197 log.go:172] (0xc0005b0580) Data frame received for 1\nI0212 14:11:28.615518 2197 log.go:172] (0xc0005b0580) (0xc000654aa0) Stream removed, broadcasting: 3\nI0212 14:11:28.615637 2197 log.go:172] (0xc000654a00) (1) Data frame handling\nI0212 14:11:28.615721 2197 log.go:172] (0xc000654a00) (1) Data frame sent\nI0212 14:11:28.615742 2197 log.go:172] (0xc0005b0580) (0xc0006a2000) Stream removed, broadcasting: 5\nI0212 14:11:28.615802 2197 log.go:172] (0xc0005b0580) (0xc000654a00) Stream removed, broadcasting: 1\nI0212 14:11:28.615842 2197 log.go:172] (0xc0005b0580) Go away received\nI0212 14:11:28.617494 2197 log.go:172] (0xc0005b0580) (0xc000654a00) Stream removed, broadcasting: 1\nI0212 14:11:28.617620 2197 log.go:172] (0xc0005b0580) (0xc000654aa0) Stream removed, broadcasting: 3\nI0212 14:11:28.617637 2197 log.go:172] (0xc0005b0580) (0xc0006a2000) Stream removed, broadcasting: 5\n" Feb 12 14:11:28.646: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 14:11:28.646: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ 
|| true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 14:11:28.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Feb 12 14:11:29.185: INFO: stderr: "I0212 14:11:28.848864 2217 log.go:172] (0xc0009562c0) (0xc0008206e0) Create stream\nI0212 14:11:28.849456 2217 log.go:172] (0xc0009562c0) (0xc0008206e0) Stream added, broadcasting: 1\nI0212 14:11:28.861237 2217 log.go:172] (0xc0009562c0) Reply frame received for 1\nI0212 14:11:28.861352 2217 log.go:172] (0xc0009562c0) (0xc00063c320) Create stream\nI0212 14:11:28.861366 2217 log.go:172] (0xc0009562c0) (0xc00063c320) Stream added, broadcasting: 3\nI0212 14:11:28.864062 2217 log.go:172] (0xc0009562c0) Reply frame received for 3\nI0212 14:11:28.864186 2217 log.go:172] (0xc0009562c0) (0xc000820780) Create stream\nI0212 14:11:28.864267 2217 log.go:172] (0xc0009562c0) (0xc000820780) Stream added, broadcasting: 5\nI0212 14:11:28.865823 2217 log.go:172] (0xc0009562c0) Reply frame received for 5\nI0212 14:11:28.995863 2217 log.go:172] (0xc0009562c0) Data frame received for 5\nI0212 14:11:28.995956 2217 log.go:172] (0xc000820780) (5) Data frame handling\nI0212 14:11:28.995975 2217 log.go:172] (0xc000820780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0212 14:11:29.026452 2217 log.go:172] (0xc0009562c0) Data frame received for 3\nI0212 14:11:29.026478 2217 log.go:172] (0xc00063c320) (3) Data frame handling\nI0212 14:11:29.026497 2217 log.go:172] (0xc00063c320) (3) Data frame sent\nI0212 14:11:29.169733 2217 log.go:172] (0xc0009562c0) Data frame received for 1\nI0212 14:11:29.170360 2217 log.go:172] (0xc0009562c0) (0xc00063c320) Stream removed, broadcasting: 3\nI0212 14:11:29.170455 2217 log.go:172] (0xc0008206e0) (1) Data frame handling\nI0212 14:11:29.170604 2217 log.go:172] (0xc0008206e0) (1) Data frame sent\nI0212 14:11:29.170688 2217 log.go:172] (0xc0009562c0) (0xc000820780) Stream removed, broadcasting: 5\nI0212 14:11:29.170749 2217 log.go:172] (0xc0009562c0) (0xc0008206e0) Stream removed, broadcasting: 1\nI0212 14:11:29.170798 2217 log.go:172] (0xc0009562c0) Go away received\nI0212 14:11:29.172456 2217 log.go:172] (0xc0009562c0) (0xc0008206e0) Stream removed, broadcasting: 1\nI0212 14:11:29.172487 2217 log.go:172] (0xc0009562c0) (0xc00063c320) Stream removed, broadcasting: 3\nI0212 14:11:29.172501 2217 log.go:172] (0xc0009562c0) (0xc000820780) Stream removed, broadcasting: 5\n" Feb 12 14:11:29.186: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Feb 12 14:11:29.186: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Feb 12 14:11:29.186: INFO: Waiting for statefulset status.replicas updated to 0 Feb 12 14:11:29.245: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 12 14:11:29.245: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 12 14:11:29.245: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 12 14:11:29.261: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:29.261: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady 
False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:29.261: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:29.261: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:29.261: INFO: Feb 12 14:11:29.261: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 14:11:31.481: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:31.481: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:31.481: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:31.481: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:31.481: INFO: Feb 12 14:11:31.481: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 14:11:32.494: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:32.494: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:32.494: INFO: ss-1 
iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:32.494: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:32.494: INFO: Feb 12 14:11:32.494: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 14:11:34.693: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:34.693: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:34.693: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:34.693: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:34.693: INFO: Feb 12 14:11:34.693: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 14:11:35.702: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:35.702: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:35.702: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:35.702: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:35.702: INFO: Feb 12 14:11:35.702: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 14:11:36.716: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:36.716: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:36.716: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:36.716: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:36.716: INFO: Feb 12 14:11:36.716: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 14:11:37.725: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:37.725: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:37.725: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 
14:11:37.726: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:37.726: INFO: Feb 12 14:11:37.726: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 12 14:11:38.741: INFO: POD NODE PHASE GRACE CONDITIONS Feb 12 14:11:38.741: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:10:41 +0000 UTC }] Feb 12 14:11:38.741: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:38.741: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:29 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-12 14:11:15 +0000 UTC }] Feb 12 14:11:38.741: INFO: Feb 12 14:11:38.741: INFO: StatefulSet ss has not reached scale 0, at 3 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-961 Feb 12 14:11:39.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:11:39.999: INFO: rc: 1 Feb 12 14:11:40.000: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0027e3a70 exit status 1 true [0xc002b682a8 0xc002b682c0 0xc002b682d8] [0xc002b682a8 0xc002b682c0 0xc002b682d8] [0xc002b682b8 0xc002b682d0] [0xba6c50 0xba6c50] 0xc001a58420 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Feb 12 14:11:50.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:11:50.186: INFO: rc: 1 Feb 12 14:11:50.186: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl 
[kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ba090 exit status 1 true [0xc00035cef8 0xc00035d040 0xc00035d2b0] [0xc00035cef8 0xc00035d040 0xc00035d2b0] [0xc00035cf30 0xc00035d1b0] [0xba6c50 0xba6c50] 0xc002cae8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:12:00.187: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:12:00.385: INFO: rc: 1 Feb 12 14:12:00.386: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ba150 exit status 1 true [0xc00035d2f8 0xc00035d8d8 0xc00035d930] [0xc00035d2f8 0xc00035d8d8 0xc00035d930] [0xc00035d458 0xc00035d918] [0xba6c50 0xba6c50] 0xc002caf920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:12:10.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:12:10.674: INFO: rc: 1 Feb 12 14:12:10.675: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0007275c0 exit status 1 true [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c438 0xc00084c580] [0xba6c50 0xba6c50] 0xc002b72720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:12:20.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:12:20.785: INFO: rc: 1 Feb 12 14:12:20.785: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0007276b0 exit status 1 true [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c988 0xc00084cb38] [0xba6c50 0xba6c50] 0xc002b72a20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:12:30.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:12:30.959: INFO: rc: 1 Feb 12 14:12:30.959: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000da4120 exit status 1 true [0xc000186000 0xc002744028 
0xc002744058] [0xc000186000 0xc002744028 0xc002744058] [0xc002744010 0xc002744048] [0xba6c50 0xba6c50] 0xc002d0a240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:12:40.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:12:41.092: INFO: rc: 1 Feb 12 14:12:41.092: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000da4210 exit status 1 true [0xc002744068 0xc0027440a8 0xc002744108] [0xc002744068 0xc0027440a8 0xc002744108] [0xc002744090 0xc0027440e0] [0xba6c50 0xba6c50] 0xc002d0a540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:12:51.092: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:12:51.246: INFO: rc: 1 Feb 12 14:12:51.246: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ba240 exit status 1 true [0xc00035d970 0xc00035d9f0 0xc00035da90] [0xc00035d970 0xc00035d9f0 0xc00035da90] [0xc00035d9d0 0xc00035da80] [0xba6c50 0xba6c50] 0xc002cafc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:13:01.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:13:01.423: INFO: rc: 1 Feb 12 14:13:01.423: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0007277a0 exit status 1 true [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cc68 0xc00084cf30] [0xba6c50 0xba6c50] 0xc002b72d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:13:11.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:13:11.623: INFO: rc: 1 Feb 12 14:13:11.623: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ba300 exit status 1 true [0xc00035daa8 0xc00035dbe8 0xc00035dcb8] [0xc00035daa8 0xc00035dbe8 0xc00035dcb8] [0xc00035db80 0xc00035dc98] [0xba6c50 0xba6c50] 0xc002caff80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:13:21.624: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:13:21.808: INFO: rc: 1 Feb 12 14:13:21.808: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002b160c0 exit status 1 true [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6048 0xc001cb6088] [0xba6c50 0xba6c50] 0xc002c0e300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:13:31.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:13:31.998: INFO: rc: 1 Feb 12 14:13:31.999: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002b161b0 exit status 1 true [0xc001cb6188 0xc001cb6268 0xc001cb63a0] [0xc001cb6188 0xc001cb6268 0xc001cb63a0] [0xc001cb6258 0xc001cb6330] [0xba6c50 0xba6c50] 0xc002c0e780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:13:41.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:13:42.211: INFO: rc: 1 Feb 12 14:13:42.211: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ba3f0 exit status 1 true [0xc00035dcf8 0xc00035ddc0 0xc00035dec8] [0xc00035dcf8 0xc00035ddc0 0xc00035dec8] [0xc00035dd58 0xc00035de60] [0xba6c50 0xba6c50] 0xc001d56300 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:13:52.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:13:52.465: INFO: rc: 1 Feb 12 14:13:52.466: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0007275f0 exit status 1 true [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c210 0xc00084c490 0xc00084c738] [0xc00084c438 0xc00084c580] [0xba6c50 0xba6c50] 0xc002cae8a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:14:02.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:14:02.599: INFO: rc: 1 Feb 12 14:14:02.600: INFO: Waiting 10s to retry failed RunHostCmd: 
error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000727710 exit status 1 true [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c7e0 0xc00084c9f0 0xc00084cb88] [0xc00084c988 0xc00084cb38] [0xba6c50 0xba6c50] 0xc002caf920 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:14:12.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:14:12.801: INFO: rc: 1 Feb 12 14:14:12.801: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000727800 exit status 1 true [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cbc8 0xc00084cd58 0xc00084cf58] [0xc00084cc68 0xc00084cf30] [0xba6c50 0xba6c50] 0xc002cafc20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:14:22.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:14:22.995: INFO: rc: 1 Feb 12 14:14:22.995: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000da4090 exit status 1 true [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6008 0xc001cb6068 0xc001cb60f0] [0xc001cb6048 0xc001cb6088] [0xba6c50 0xba6c50] 0xc002b72720 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:14:32.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:14:33.131: INFO: rc: 1 Feb 12 14:14:33.131: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0007278f0 exit status 1 true [0xc00084cf68 0xc00084d160 0xc00084d3e8] [0xc00084cf68 0xc00084d160 0xc00084d3e8] [0xc00084d100 0xc00084d348] [0xba6c50 0xba6c50] 0xc002caff80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:14:43.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:14:43.351: INFO: rc: 1 Feb 12 14:14:43.352: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ba0c0 exit 
status 1 true [0xc002744000 0xc002744038 0xc002744068] [0xc002744000 0xc002744038 0xc002744068] [0xc002744028 0xc002744058] [0xba6c50 0xba6c50] 0xc002d0a240 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:14:53.352: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:14:53.492: INFO: rc: 1 Feb 12 14:14:53.493: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ba1e0 exit status 1 true [0xc002744080 0xc0027440b8 0xc002744110] [0xc002744080 0xc0027440b8 0xc002744110] [0xc0027440a8 0xc002744108] [0xba6c50 0xba6c50] 0xc002d0a540 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:15:03.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:15:03.702: INFO: rc: 1 Feb 12 14:15:03.703: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000727a40 exit status 1 true [0xc00084d440 0xc00084d620 0xc00084d680] [0xc00084d440 0xc00084d620 0xc00084d680] [0xc00084d5e0 0xc00084d670] [0xba6c50 0xba6c50] 0xc002c0e360 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:15:13.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:15:13.871: INFO: rc: 1 Feb 12 14:15:13.871: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc000727b30 exit status 1 true [0xc00084d6e0 0xc00084d808 0xc00084da38] [0xc00084d6e0 0xc00084d808 0xc00084da38] [0xc00084d7b0 0xc00084d9c8] [0xba6c50 0xba6c50] 0xc002c0e7e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 12 14:15:23.872: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Feb 12 14:15:24.003: INFO: rc: 1 Feb 12 14:15:24.003: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-961 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0010ba2d0 exit status 1 true [0xc002744118 0xc002744130 0xc002744148] [0xc002744118 0xc002744130 0xc002744148] [0xc002744128 0xc002744140] [0xba6c50 0xba6c50] 0xc002d0aae0