I0423 12:55:43.899565 6 e2e.go:243] Starting e2e run "3574706b-c38b-47d5-b1f9-8cd6bffcd536" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1587646543 - Will randomize all specs
Will run 215 of 4412 specs

Apr 23 12:55:44.079: INFO: >>> kubeConfig: /root/.kube/config
Apr 23 12:55:44.082: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 23 12:55:44.103: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 23 12:55:44.140: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 23 12:55:44.140: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 23 12:55:44.140: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 23 12:55:44.150: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 23 12:55:44.150: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 23 12:55:44.150: INFO: e2e test version: v1.15.11
Apr 23 12:55:44.151: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 12:55:44.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Apr 23 12:55:44.212: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3461.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3461.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 23 12:55:50.296: INFO: DNS probes using dns-test-a584af57-e720-41c7-abb3-955305ed251e succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3461.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3461.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 23 12:55:56.442: INFO: File wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:55:56.446: INFO: File jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:55:56.446: INFO: Lookups using dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 failed for: [wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local]
Apr 23 12:56:01.452: INFO: File wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:56:01.456: INFO: File jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:56:01.456: INFO: Lookups using dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 failed for: [wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local]
Apr 23 12:56:06.451: INFO: File wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:56:06.455: INFO: File jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:56:06.455: INFO: Lookups using dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 failed for: [wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local]
Apr 23 12:56:11.451: INFO: File wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:56:11.456: INFO: File jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:56:11.456: INFO: Lookups using dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 failed for: [wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local]
Apr 23 12:56:16.451: INFO: File wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:56:16.455: INFO: File jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local from pod dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 23 12:56:16.455: INFO: Lookups using dns-3461/dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 failed for: [wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local]
Apr 23 12:56:21.454: INFO: DNS probes using dns-test-596700ae-8a2a-43e2-b705-80bbbdf464c3 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3461.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-3461.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-3461.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-3461.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 23 12:56:28.133: INFO: DNS probes using dns-test-5710acc8-ba15-4a9c-936d-6d650bd1512c succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 12:56:28.225: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3461" for this suite.
Apr 23 12:56:34.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 12:56:34.342: INFO: namespace dns-3461 deletion completed in 6.110074942s

• [SLOW TEST:50.191 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 12:56:34.342: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 12:56:34.394: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23731e5b-1eab-4f02-ade3-8e2abc1d8dc0" in namespace "downward-api-6215" to be "success or failure"
Apr 23 12:56:34.409: INFO: Pod "downwardapi-volume-23731e5b-1eab-4f02-ade3-8e2abc1d8dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.545484ms
Apr 23 12:56:36.413: INFO: Pod "downwardapi-volume-23731e5b-1eab-4f02-ade3-8e2abc1d8dc0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019194995s
Apr 23 12:56:38.418: INFO: Pod "downwardapi-volume-23731e5b-1eab-4f02-ade3-8e2abc1d8dc0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023798823s
STEP: Saw pod success
Apr 23 12:56:38.418: INFO: Pod "downwardapi-volume-23731e5b-1eab-4f02-ade3-8e2abc1d8dc0" satisfied condition "success or failure"
Apr 23 12:56:38.421: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-23731e5b-1eab-4f02-ade3-8e2abc1d8dc0 container client-container:
STEP: delete the pod
Apr 23 12:56:38.446: INFO: Waiting for pod downwardapi-volume-23731e5b-1eab-4f02-ade3-8e2abc1d8dc0 to disappear
Apr 23 12:56:38.464: INFO: Pod downwardapi-volume-23731e5b-1eab-4f02-ade3-8e2abc1d8dc0 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 12:56:38.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6215" for this suite.
Apr 23 12:56:44.511: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 12:56:44.623: INFO: namespace downward-api-6215 deletion completed in 6.155901712s

• [SLOW TEST:10.281 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 12:56:44.624: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-xhz5
STEP: Creating a pod to test atomic-volume-subpath
Apr 23 12:56:44.820: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xhz5" in namespace "subpath-950" to be "success or failure"
Apr 23 12:56:44.863: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Pending", Reason="", readiness=false. Elapsed: 42.184238ms
Apr 23 12:56:46.867: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046425629s
Apr 23 12:56:48.871: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 4.050769219s
Apr 23 12:56:50.876: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 6.055373475s
Apr 23 12:56:52.880: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 8.059742842s
Apr 23 12:56:54.884: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 10.063577869s
Apr 23 12:56:56.889: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 12.068275206s
Apr 23 12:56:58.893: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 14.072622685s
Apr 23 12:57:00.898: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 16.077173556s
Apr 23 12:57:02.901: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 18.08075704s
Apr 23 12:57:04.905: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 20.085036789s
Apr 23 12:57:06.910: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Running", Reason="", readiness=true. Elapsed: 22.089431125s
Apr 23 12:57:08.914: INFO: Pod "pod-subpath-test-configmap-xhz5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.09375008s
STEP: Saw pod success
Apr 23 12:57:08.914: INFO: Pod "pod-subpath-test-configmap-xhz5" satisfied condition "success or failure"
Apr 23 12:57:08.917: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-xhz5 container test-container-subpath-configmap-xhz5:
STEP: delete the pod
Apr 23 12:57:08.969: INFO: Waiting for pod pod-subpath-test-configmap-xhz5 to disappear
Apr 23 12:57:08.975: INFO: Pod pod-subpath-test-configmap-xhz5 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xhz5
Apr 23 12:57:08.975: INFO: Deleting pod "pod-subpath-test-configmap-xhz5" in namespace "subpath-950"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 12:57:08.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-950" for this suite.
Apr 23 12:57:14.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 12:57:15.068: INFO: namespace subpath-950 deletion completed in 6.08803753s

• [SLOW TEST:30.445 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 12:57:15.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 23 12:57:23.224: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 12:57:23.244: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 12:57:25.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 12:57:25.248: INFO: Pod pod-with-prestop-http-hook still exists
Apr 23 12:57:27.244: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Apr 23 12:57:27.248: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 12:57:27.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7106" for this suite.
Apr 23 12:57:49.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 12:57:49.357: INFO: namespace container-lifecycle-hook-7106 deletion completed in 22.095488833s

• [SLOW TEST:34.287 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 12:57:49.357: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-022c0bef-43a6-4353-9ea5-0d47e5f330cc
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-022c0bef-43a6-4353-9ea5-0d47e5f330cc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 12:59:20.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7190" for this suite.
Apr 23 12:59:42.113: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 12:59:42.276: INFO: namespace configmap-7190 deletion completed in 22.229548264s

• [SLOW TEST:112.919 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 12:59:42.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 23 12:59:45.502: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 12:59:45.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2787" for this suite.
Apr 23 12:59:51.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 12:59:51.696: INFO: namespace container-runtime-2787 deletion completed in 6.142767616s

• [SLOW TEST:9.420 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 12:59:51.696: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Apr 23 12:59:51.767: INFO: Waiting up to 5m0s for pod "var-expansion-7554a8ae-d4a5-4c34-a535-d62aff8cceb8" in namespace "var-expansion-7503" to be "success or failure"
Apr 23 12:59:51.796: INFO: Pod "var-expansion-7554a8ae-d4a5-4c34-a535-d62aff8cceb8": Phase="Pending", Reason="", readiness=false. Elapsed: 29.449929ms
Apr 23 12:59:53.800: INFO: Pod "var-expansion-7554a8ae-d4a5-4c34-a535-d62aff8cceb8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033081714s
Apr 23 12:59:55.804: INFO: Pod "var-expansion-7554a8ae-d4a5-4c34-a535-d62aff8cceb8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037439108s
STEP: Saw pod success
Apr 23 12:59:55.804: INFO: Pod "var-expansion-7554a8ae-d4a5-4c34-a535-d62aff8cceb8" satisfied condition "success or failure"
Apr 23 12:59:55.807: INFO: Trying to get logs from node iruya-worker pod var-expansion-7554a8ae-d4a5-4c34-a535-d62aff8cceb8 container dapi-container:
STEP: delete the pod
Apr 23 12:59:55.827: INFO: Waiting for pod var-expansion-7554a8ae-d4a5-4c34-a535-d62aff8cceb8 to disappear
Apr 23 12:59:55.830: INFO: Pod var-expansion-7554a8ae-d4a5-4c34-a535-d62aff8cceb8 no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 12:59:55.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7503" for this suite.
Apr 23 13:00:01.857: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:00:01.932: INFO: namespace var-expansion-7503 deletion completed in 6.098169535s

• [SLOW TEST:10.235 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:00:01.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 13:00:02.022: INFO: Waiting up to 5m0s for pod "downwardapi-volume-065b6bd8-9944-48e4-903d-ea943f628817" in namespace "downward-api-7398" to be "success or failure"
Apr 23 13:00:02.026: INFO: Pod "downwardapi-volume-065b6bd8-9944-48e4-903d-ea943f628817": Phase="Pending", Reason="", readiness=false. Elapsed: 3.90445ms
Apr 23 13:00:04.030: INFO: Pod "downwardapi-volume-065b6bd8-9944-48e4-903d-ea943f628817": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008170699s
Apr 23 13:00:06.034: INFO: Pod "downwardapi-volume-065b6bd8-9944-48e4-903d-ea943f628817": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01244322s
STEP: Saw pod success
Apr 23 13:00:06.034: INFO: Pod "downwardapi-volume-065b6bd8-9944-48e4-903d-ea943f628817" satisfied condition "success or failure"
Apr 23 13:00:06.037: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-065b6bd8-9944-48e4-903d-ea943f628817 container client-container:
STEP: delete the pod
Apr 23 13:00:06.110: INFO: Waiting for pod downwardapi-volume-065b6bd8-9944-48e4-903d-ea943f628817 to disappear
Apr 23 13:00:06.116: INFO: Pod downwardapi-volume-065b6bd8-9944-48e4-903d-ea943f628817 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:00:06.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7398" for this suite.
Apr 23 13:00:12.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:00:12.211: INFO: namespace downward-api-7398 deletion completed in 6.092557145s

• [SLOW TEST:10.279 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:00:12.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:00:38.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9190" for this suite.
Apr 23 13:00:44.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:00:45.020: INFO: namespace container-runtime-9190 deletion completed in 6.07717331s

• [SLOW TEST:32.809 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:00:45.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-b986064f-fdb7-4efb-a8c2-cfd97eaf6519
STEP: Creating a pod to test consume secrets
Apr 23 13:00:45.095: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-13d4da05-fbb0-49ab-8bb2-0b867360d35b" in namespace "projected-5852" to be "success or failure"
Apr 23 13:00:45.098: INFO: Pod "pod-projected-secrets-13d4da05-fbb0-49ab-8bb2-0b867360d35b": Phase="Pending", Reason="", readiness=false. Elapsed: 3.077032ms
Apr 23 13:00:47.102: INFO: Pod "pod-projected-secrets-13d4da05-fbb0-49ab-8bb2-0b867360d35b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007454398s
Apr 23 13:00:49.106: INFO: Pod "pod-projected-secrets-13d4da05-fbb0-49ab-8bb2-0b867360d35b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010769634s
STEP: Saw pod success
Apr 23 13:00:49.106: INFO: Pod "pod-projected-secrets-13d4da05-fbb0-49ab-8bb2-0b867360d35b" satisfied condition "success or failure"
Apr 23 13:00:49.108: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-13d4da05-fbb0-49ab-8bb2-0b867360d35b container projected-secret-volume-test:
STEP: delete the pod
Apr 23 13:00:49.124: INFO: Waiting for pod pod-projected-secrets-13d4da05-fbb0-49ab-8bb2-0b867360d35b to disappear
Apr 23 13:00:49.128: INFO: Pod pod-projected-secrets-13d4da05-fbb0-49ab-8bb2-0b867360d35b no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:00:49.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5852" for this suite.
Apr 23 13:00:55.172: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:00:55.247: INFO: namespace projected-5852 deletion completed in 6.116155245s

• [SLOW TEST:10.227 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:00:55.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP:
Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-becc6c77-7709-4d96-b9db-5471f08acec3 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-becc6c77-7709-4d96-b9db-5471f08acec3 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:01:01.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1596" for this suite. Apr 23 13:01:23.422: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:01:23.491: INFO: namespace projected-1596 deletion completed in 22.085730169s • [SLOW TEST:28.243 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:01:23.492: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 23 13:01:23.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7560' Apr 23 13:01:25.900: INFO: stderr: "" Apr 23 13:01:25.901: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 23 13:01:25.901: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7560' Apr 23 13:01:26.026: INFO: stderr: "" Apr 23 13:01:26.026: INFO: stdout: "update-demo-nautilus-drq8h update-demo-nautilus-hrmt4 " Apr 23 13:01:26.026: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drq8h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7560' Apr 23 13:01:26.130: INFO: stderr: "" Apr 23 13:01:26.130: INFO: stdout: "" Apr 23 13:01:26.130: INFO: update-demo-nautilus-drq8h is created but not running Apr 23 13:01:31.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7560' Apr 23 13:01:31.247: INFO: stderr: "" Apr 23 13:01:31.247: INFO: stdout: "update-demo-nautilus-drq8h update-demo-nautilus-hrmt4 " Apr 23 13:01:31.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drq8h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7560' Apr 23 13:01:31.344: INFO: stderr: "" Apr 23 13:01:31.344: INFO: stdout: "true" Apr 23 13:01:31.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-drq8h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7560' Apr 23 13:01:31.450: INFO: stderr: "" Apr 23 13:01:31.450: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 23 13:01:31.450: INFO: validating pod update-demo-nautilus-drq8h Apr 23 13:01:31.455: INFO: got data: { "image": "nautilus.jpg" } Apr 23 13:01:31.455: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 23 13:01:31.455: INFO: update-demo-nautilus-drq8h is verified up and running Apr 23 13:01:31.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrmt4 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7560' Apr 23 13:01:31.552: INFO: stderr: "" Apr 23 13:01:31.553: INFO: stdout: "true" Apr 23 13:01:31.553: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-hrmt4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7560' Apr 23 13:01:31.650: INFO: stderr: "" Apr 23 13:01:31.650: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 23 13:01:31.650: INFO: validating pod update-demo-nautilus-hrmt4 Apr 23 13:01:31.655: INFO: got data: { "image": "nautilus.jpg" } Apr 23 13:01:31.655: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 23 13:01:31.655: INFO: update-demo-nautilus-hrmt4 is verified up and running STEP: using delete to clean up resources Apr 23 13:01:31.655: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7560' Apr 23 13:01:31.762: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 23 13:01:31.763: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 23 13:01:31.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7560' Apr 23 13:01:31.855: INFO: stderr: "No resources found.\n" Apr 23 13:01:31.855: INFO: stdout: "" Apr 23 13:01:31.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7560 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 23 13:01:31.940: INFO: stderr: "" Apr 23 13:01:31.940: INFO: stdout: "update-demo-nautilus-drq8h\nupdate-demo-nautilus-hrmt4\n" Apr 23 13:01:32.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7560' Apr 23 13:01:32.542: INFO: stderr: "No resources found.\n" Apr 23 13:01:32.542: INFO: stdout: "" Apr 23 13:01:32.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7560 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 23 13:01:32.634: INFO: stderr: "" Apr 23 13:01:32.634: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:01:32.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7560" for this suite. 
Apr 23 13:01:38.790: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:01:38.878: INFO: namespace kubectl-7560 deletion completed in 6.240809109s • [SLOW TEST:15.386 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:01:38.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-3438 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-3438 STEP: Deleting pre-stop pod Apr 23 13:01:52.010: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. 
Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:01:52.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-3438" for this suite. Apr 23 13:02:30.074: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:02:30.146: INFO: namespace prestop-3438 deletion completed in 38.121355661s • [SLOW TEST:51.268 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:02:30.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod 
[AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:02:34.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6103" for this suite. Apr 23 13:02:40.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:02:40.459: INFO: namespace emptydir-wrapper-6103 deletion completed in 6.122649476s • [SLOW TEST:10.313 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:02:40.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 13:02:40.544: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 4.531363ms)
Apr 23 13:02:40.546: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.713636ms)
Apr 23 13:02:40.549: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.927167ms)
Apr 23 13:02:40.552: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.555058ms)
Apr 23 13:02:40.555: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.845426ms)
Apr 23 13:02:40.558: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.200106ms)
Apr 23 13:02:40.562: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.435896ms)
Apr 23 13:02:40.565: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.935388ms)
Apr 23 13:02:40.568: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.533378ms)
Apr 23 13:02:40.571: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.203798ms)
Apr 23 13:02:40.575: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.395611ms)
Apr 23 13:02:40.578: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.227573ms)
Apr 23 13:02:40.594: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 16.132424ms)
Apr 23 13:02:40.598: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.557043ms)
Apr 23 13:02:40.600: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.221195ms)
Apr 23 13:02:40.603: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.483789ms)
Apr 23 13:02:40.605: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.224478ms)
Apr 23 13:02:40.607: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 1.984088ms)
Apr 23 13:02:40.609: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.449078ms)
Apr 23 13:02:40.612: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.554664ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:02:40.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3375" for this suite. Apr 23 13:02:46.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:02:46.711: INFO: namespace proxy-3375 deletion completed in 6.096092248s • [SLOW TEST:6.251 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:02:46.711: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Apr 23 13:02:46.811: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy
--unix-socket=/tmp/kubectl-proxy-unix283147327/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:02:46.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6036" for this suite. Apr 23 13:02:52.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:02:52.978: INFO: namespace kubectl-6036 deletion completed in 6.095055229s • [SLOW TEST:6.267 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:02:52.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5368.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@dns-test-service.dns-5368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5368.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 235.149.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.149.235_udp@PTR;check="$$(dig +tcp +noall +answer +search 235.149.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.149.235_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5368.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5368.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5368.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5368.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5368.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 235.149.101.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.101.149.235_udp@PTR;check="$$(dig +tcp +noall +answer +search 235.149.101.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.101.149.235_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 13:02:59.139: INFO: Unable to read wheezy_udp@dns-test-service.dns-5368.svc.cluster.local from pod dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211: the server could not find the requested resource (get pods dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211) Apr 23 13:02:59.142: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5368.svc.cluster.local from pod dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211: the server could not find the requested resource (get pods dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211) Apr 23 13:02:59.144: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local from pod dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211: the server could not find the requested resource (get pods dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211) Apr 23 13:02:59.148: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local from pod dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211: the server could not find the requested resource (get pods dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211) Apr 23 13:02:59.177: INFO: Unable to read jessie_udp@dns-test-service.dns-5368.svc.cluster.local from pod dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211: the server could not find the requested resource (get pods dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211) Apr 23 13:02:59.180: INFO: Unable to read jessie_tcp@dns-test-service.dns-5368.svc.cluster.local from pod dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211: the server could not find the requested resource (get pods dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211) Apr 23 13:02:59.183: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local from pod 
dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211: the server could not find the requested resource (get pods dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211)
Apr 23 13:02:59.186: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local from pod dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211: the server could not find the requested resource (get pods dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211)
Apr 23 13:02:59.206: INFO: Lookups using dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211 failed for: [wheezy_udp@dns-test-service.dns-5368.svc.cluster.local wheezy_tcp@dns-test-service.dns-5368.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local jessie_udp@dns-test-service.dns-5368.svc.cluster.local jessie_tcp@dns-test-service.dns-5368.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-5368.svc.cluster.local]
Apr 23 13:03:29.292: INFO: DNS probes using dns-5368/dns-test-0a2fe3fe-750f-4823-ba8b-63caa1227211 succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:03:29.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5368" for this suite.
Apr 23 13:03:35.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:03:36.094: INFO: namespace dns-5368 deletion completed in 6.109201211s

• [SLOW TEST:43.116 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:03:36.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-tttd
STEP: Creating a pod to test atomic-volume-subpath
Apr 23 13:03:36.177: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-tttd" in namespace "subpath-8461" to be "success or failure"
Apr 23 13:03:36.180: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Pending", Reason="", readiness=false. Elapsed: 3.447264ms
Apr 23 13:03:38.184: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007710371s
Apr 23 13:03:40.189: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 4.012368522s
Apr 23 13:03:42.193: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 6.016836657s
Apr 23 13:03:44.198: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 8.02167037s
Apr 23 13:03:46.203: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 10.026082902s
Apr 23 13:03:48.207: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 12.030656822s
Apr 23 13:03:50.211: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 14.034715849s
Apr 23 13:03:52.216: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 16.03904679s
Apr 23 13:03:54.220: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 18.043265775s
Apr 23 13:03:56.224: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 20.047733918s
Apr 23 13:03:58.228: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Running", Reason="", readiness=true. Elapsed: 22.051819049s
Apr 23 13:04:00.233: INFO: Pod "pod-subpath-test-secret-tttd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.05619793s
STEP: Saw pod success
Apr 23 13:04:00.233: INFO: Pod "pod-subpath-test-secret-tttd" satisfied condition "success or failure"
Apr 23 13:04:00.236: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-tttd container test-container-subpath-secret-tttd:
STEP: delete the pod
Apr 23 13:04:00.261: INFO: Waiting for pod pod-subpath-test-secret-tttd to disappear
Apr 23 13:04:00.265: INFO: Pod pod-subpath-test-secret-tttd no longer exists
STEP: Deleting pod pod-subpath-test-secret-tttd
Apr 23 13:04:00.265: INFO: Deleting pod "pod-subpath-test-secret-tttd" in namespace "subpath-8461"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:04:00.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8461" for this suite.
Apr 23 13:04:06.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:04:06.349: INFO: namespace subpath-8461 deletion completed in 6.077221149s

• [SLOW TEST:30.254 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:04:06.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:04:06.391: INFO: Creating ReplicaSet my-hostname-basic-9f59f6bc-55a4-450e-a51e-dba8f8738568
Apr 23 13:04:06.417: INFO: Pod name my-hostname-basic-9f59f6bc-55a4-450e-a51e-dba8f8738568: Found 0 pods out of 1
Apr 23 13:04:11.421: INFO: Pod name my-hostname-basic-9f59f6bc-55a4-450e-a51e-dba8f8738568: Found 1 pods out of 1
Apr 23 13:04:11.422: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-9f59f6bc-55a4-450e-a51e-dba8f8738568" is running
Apr 23 13:04:11.424: INFO: Pod "my-hostname-basic-9f59f6bc-55a4-450e-a51e-dba8f8738568-4spvz" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 13:04:06 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 13:04:09 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 13:04:09 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 13:04:06 +0000 UTC Reason: Message:}])
Apr 23 13:04:11.425: INFO: Trying to dial the pod
Apr 23 13:04:16.436: INFO: Controller my-hostname-basic-9f59f6bc-55a4-450e-a51e-dba8f8738568: Got expected result from replica 1 [my-hostname-basic-9f59f6bc-55a4-450e-a51e-dba8f8738568-4spvz]: "my-hostname-basic-9f59f6bc-55a4-450e-a51e-dba8f8738568-4spvz", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:04:16.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-512" for this suite.
Apr 23 13:04:22.455: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:04:22.530: INFO: namespace replicaset-512 deletion completed in 6.090295992s

• [SLOW TEST:16.182 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:04:22.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-9622/configmap-test-b4de4ff8-28c0-4a64-a30a-7069eae31251
STEP: Creating a pod to test consume configMaps
Apr 23 13:04:22.610: INFO: Waiting up to 5m0s for pod "pod-configmaps-a5e0e04a-3934-4913-a5f8-19eb74bdd4e1" in namespace "configmap-9622" to be "success or failure"
Apr 23 13:04:22.635: INFO: Pod "pod-configmaps-a5e0e04a-3934-4913-a5f8-19eb74bdd4e1": Phase="Pending", Reason="", readiness=false. Elapsed: 24.327758ms
Apr 23 13:04:24.639: INFO: Pod "pod-configmaps-a5e0e04a-3934-4913-a5f8-19eb74bdd4e1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028511809s
Apr 23 13:04:26.643: INFO: Pod "pod-configmaps-a5e0e04a-3934-4913-a5f8-19eb74bdd4e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032417163s
STEP: Saw pod success
Apr 23 13:04:26.643: INFO: Pod "pod-configmaps-a5e0e04a-3934-4913-a5f8-19eb74bdd4e1" satisfied condition "success or failure"
Apr 23 13:04:26.646: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-a5e0e04a-3934-4913-a5f8-19eb74bdd4e1 container env-test:
STEP: delete the pod
Apr 23 13:04:26.705: INFO: Waiting for pod pod-configmaps-a5e0e04a-3934-4913-a5f8-19eb74bdd4e1 to disappear
Apr 23 13:04:26.720: INFO: Pod pod-configmaps-a5e0e04a-3934-4913-a5f8-19eb74bdd4e1 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:04:26.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9622" for this suite.
Apr 23 13:04:32.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:04:32.830: INFO: namespace configmap-9622 deletion completed in 6.106819188s

• [SLOW TEST:10.300 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:04:32.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-a96a85b6-ca8b-4f16-9a23-16d8977768f6
STEP: Creating secret with name s-test-opt-upd-1e0a5936-d8c4-4598-a8cd-d04b04d36fad
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-a96a85b6-ca8b-4f16-9a23-16d8977768f6
STEP: Updating secret s-test-opt-upd-1e0a5936-d8c4-4598-a8cd-d04b04d36fad
STEP: Creating secret with name s-test-opt-create-09e3bec6-d14e-472f-99d3-09b29f01dde7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:04:41.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2824" for this suite.
Apr 23 13:05:03.052: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:05:03.128: INFO: namespace projected-2824 deletion completed in 22.093390115s

• [SLOW TEST:30.297 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:05:03.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-17fbbcec-6e9d-49e5-aa16-bc2f3fcd8e9f in namespace container-probe-2655
Apr 23 13:05:07.228: INFO: Started pod test-webserver-17fbbcec-6e9d-49e5-aa16-bc2f3fcd8e9f in namespace container-probe-2655
STEP: checking the pod's current state and verifying that restartCount is present
Apr 23 13:05:07.231: INFO: Initial restart count of pod test-webserver-17fbbcec-6e9d-49e5-aa16-bc2f3fcd8e9f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:09:08.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2655" for this suite.
Apr 23 13:09:14.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:09:14.489: INFO: namespace container-probe-2655 deletion completed in 6.171696231s

• [SLOW TEST:251.360 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:09:14.490: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:09:18.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4435" for this suite.
Apr 23 13:10:04.604: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:10:04.702: INFO: namespace kubelet-test-4435 deletion completed in 46.114792307s

• [SLOW TEST:50.213 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:10:04.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:10:04.779: INFO: Creating deployment "nginx-deployment"
Apr 23 13:10:04.784: INFO: Waiting for observed generation 1
Apr 23 13:10:06.794: INFO: Waiting for all required pods to come up
Apr 23 13:10:06.797: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 23 13:10:14.807: INFO: Waiting for deployment "nginx-deployment" to complete
Apr 23 13:10:14.812: INFO: Updating deployment "nginx-deployment" with a non-existent image
Apr 23 13:10:14.853: INFO: Updating deployment nginx-deployment
Apr 23 13:10:14.853: INFO: Waiting for observed generation 2
Apr 23 13:10:16.860: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 23 13:10:16.863: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 23 13:10:16.865: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 23 13:10:16.871: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 23 13:10:16.871: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 23 13:10:16.874: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 23 13:10:16.878: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Apr 23 13:10:16.878: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Apr 23 13:10:16.884: INFO: Updating deployment nginx-deployment
Apr 23 13:10:16.884: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Apr 23 13:10:16.919: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 23 13:10:17.028: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 23 13:10:17.124: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-9165,SelfLink:/apis/apps/v1/namespaces/deployment-9165/deployments/nginx-deployment,UID:0e857e86-b17e-4a86-80ce-041a462d5d49,ResourceVersion:6996654,Generation:3,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-23 13:10:15 +0000 UTC 2020-04-23 13:10:04 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-23 13:10:16 +0000 UTC 2020-04-23 13:10:16 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 23 13:10:17.278: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-9165,SelfLink:/apis/apps/v1/namespaces/deployment-9165/replicasets/nginx-deployment-55fb7cb77f,UID:2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1,ResourceVersion:6996690,Generation:3,CreationTimestamp:2020-04-23 13:10:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0e857e86-b17e-4a86-80ce-041a462d5d49 0xc002a2d377 0xc002a2d378}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 23 13:10:17.278: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 23 13:10:17.278: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-9165,SelfLink:/apis/apps/v1/namespaces/deployment-9165/replicasets/nginx-deployment-7b8c6f4498,UID:8f3e5ced-c954-4964-a7bc-4059143c22f2,ResourceVersion:6996687,Generation:3,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 0e857e86-b17e-4a86-80ce-041a462d5d49 0xc002a2d557 0xc002a2d558}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 23 13:10:17.335: INFO: Pod "nginx-deployment-55fb7cb77f-2h2ch" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2h2ch,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-2h2ch,UID:eb4aec9c-f667-4ac9-a87f-cc6b0c28b55f,ResourceVersion:6996616,Generation:0,CreationTimestamp:2020-04-23 13:10:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002a2deb7 0xc002a2deb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002a2df30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a2df50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-23 13:10:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.336: INFO: Pod "nginx-deployment-55fb7cb77f-2wc8z" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-2wc8z,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-2wc8z,UID:14f4d9ad-82df-47e3-bd6f-cdc4abb78ad5,ResourceVersion:6996681,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c020 0xc002e8c021}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8c0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8c0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.336: INFO: Pod "nginx-deployment-55fb7cb77f-5cm9p" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5cm9p,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-5cm9p,UID:9f5fc800-864f-4203-841e-33360af6b255,ResourceVersion:6996647,Generation:0,CreationTimestamp:2020-04-23 13:10:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c147 0xc002e8c148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002e8c1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8c1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.336: INFO: Pod "nginx-deployment-55fb7cb77f-67fh4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-67fh4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-67fh4,UID:4f642695-23fb-4012-a6f0-ce1632e17b4e,ResourceVersion:6996682,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c267 0xc002e8c268}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8c2e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8c300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.336: INFO: Pod "nginx-deployment-55fb7cb77f-7v269" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-7v269,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-7v269,UID:54385a4b-ff7d-42e2-8b87-fe57162c4453,ResourceVersion:6996693,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c387 0xc002e8c388}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8c400} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8c420}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.336: INFO: Pod "nginx-deployment-55fb7cb77f-b7d7n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b7d7n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-b7d7n,UID:8f5d8af0-4d40-4df5-ad47-e59bd148fd57,ResourceVersion:6996608,Generation:0,CreationTimestamp:2020-04-23 13:10:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c4a7 0xc002e8c4a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002e8c540} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8c560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-23 13:10:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.336: INFO: Pod "nginx-deployment-55fb7cb77f-bbzcb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-bbzcb,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-bbzcb,UID:3ffb6f80-0609-43c8-a210-d7ac0b8d1acb,ResourceVersion:6996668,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c630 0xc002e8c631}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8c6b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8c6d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.337: INFO: Pod "nginx-deployment-55fb7cb77f-cvh7f" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-cvh7f,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-cvh7f,UID:3b8a3c32-6670-4476-9abf-30c12d50415d,ResourceVersion:6996689,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c757 0xc002e8c758}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc002e8c7d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8c7f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.337: INFO: Pod "nginx-deployment-55fb7cb77f-fbgn9" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fbgn9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-fbgn9,UID:95c5b8e9-d71b-416e-ae09-470c20e6d68f,ResourceVersion:6996656,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c877 0xc002e8c878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8c8f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8c910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.337: INFO: Pod "nginx-deployment-55fb7cb77f-hg229" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hg229,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-hg229,UID:f166d217-3ece-48f4-82b6-6062c6beefa5,ResourceVersion:6996602,Generation:0,CreationTimestamp:2020-04-23 13:10:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8c997 0xc002e8c998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8ca10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8ca30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-23 13:10:14 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.337: INFO: Pod "nginx-deployment-55fb7cb77f-p4gqm" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p4gqm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-p4gqm,UID:c17c3b84-1356-4dcd-9fe7-9158f797da03,ResourceVersion:6996621,Generation:0,CreationTimestamp:2020-04-23 13:10:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8cb00 0xc002e8cb01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8cb80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8cba0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-23 13:10:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.337: INFO: Pod "nginx-deployment-55fb7cb77f-rlfjv" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rlfjv,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-rlfjv,UID:1eebda38-f15c-4833-b7dc-432c1c855779,ResourceVersion:6996624,Generation:0,CreationTimestamp:2020-04-23 13:10:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8cc70 0xc002e8cc71}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002e8ccf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8cd10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:15 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:15 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:15 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-23 13:10:15 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.337: INFO: Pod "nginx-deployment-55fb7cb77f-wr9rt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wr9rt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-55fb7cb77f-wr9rt,UID:47342046-309a-451b-a5c1-fbd2a72f2431,ResourceVersion:6996685,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 2d87ba5e-2c0c-4730-b8f7-e6b319fb24e1 0xc002e8cde0 0xc002e8cde1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8ce60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8ce80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.338: INFO: Pod "nginx-deployment-7b8c6f4498-2cbl4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-2cbl4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-2cbl4,UID:2f632afe-d047-44a3-a72b-68ae489e09a0,ResourceVersion:6996660,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8cf07 0xc002e8cf08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8cf80} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8cfa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.338: INFO: Pod "nginx-deployment-7b8c6f4498-4kqsb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4kqsb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-4kqsb,UID:77490364-0a81-4c2e-8ee9-ce7882f32c21,ResourceVersion:6996662,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8d027 0xc002e8d028}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8d0a0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8d0c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.338: INFO: Pod "nginx-deployment-7b8c6f4498-6n8tn" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6n8tn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-6n8tn,UID:e46ae32f-c481-458d-a603-514512c547e5,ResourceVersion:6996560,Generation:0,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8d147 0xc002e8d148}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8d1c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8d1e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.95,StartTime:2020-04-23 13:10:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-23 13:10:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f524f56fbf483e55498824d861f18fd088df766f4f8d98d374b770f7588af1c8}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.338: INFO: Pod "nginx-deployment-7b8c6f4498-6sj2f" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6sj2f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-6sj2f,UID:52678707-c75c-4932-8690-2d33bf5691e1,ResourceVersion:6996672,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8d2b7 0xc002e8d2b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8d330} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8d350}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.338: INFO: Pod "nginx-deployment-7b8c6f4498-76wln" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-76wln,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-76wln,UID:6247403a-c303-4541-9b78-4ea2c0b3d59a,ResourceVersion:6996536,Generation:0,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8d3d7 0xc002e8d3d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8d450} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8d470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.93,StartTime:2020-04-23 13:10:04 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-23 13:10:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://bb3a99a68137b02b6834cccaa3dfc188faf6143929b7e0e902b203991e10bf85}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.338: INFO: Pod "nginx-deployment-7b8c6f4498-7kp55" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7kp55,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-7kp55,UID:32fe5abd-fbfc-4ec8-876f-5c836e8b9076,ResourceVersion:6996666,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8d547 0xc002e8d548}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8d5c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8d5e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.338: INFO: Pod "nginx-deployment-7b8c6f4498-7zlb2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7zlb2,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-7zlb2,UID:c6878b06-67bc-4443-9005-971ce8a9452a,ResourceVersion:6996649,Generation:0,CreationTimestamp:2020-04-23 13:10:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8d667 0xc002e8d668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8d6e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8d700}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.339: INFO: Pod "nginx-deployment-7b8c6f4498-9svh4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9svh4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-9svh4,UID:c88e8d4c-b1d6-4fef-9e63-eaf894969522,ResourceVersion:6996562,Generation:0,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8d787 0xc002e8d788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8d800} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8d820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.94,StartTime:2020-04-23 13:10:05 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-23 13:10:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9ffa7a911286a78a02f8fc61ef88debc2ff15b133d49a15593a0351ed0d1b1fd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.339: INFO: Pod "nginx-deployment-7b8c6f4498-djmx9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-djmx9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-djmx9,UID:2d4d3adb-cafd-4955-be89-774de11b80e3,ResourceVersion:6996529,Generation:0,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8d8f7 0xc002e8d8f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8d970} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8d990}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:11 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:11 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.30,StartTime:2020-04-23 13:10:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-23 13:10:10 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://052aade37d6fe361a1c963ccc3fe9829da22bca4799ebcd95072d5ecaf97ce6a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.339: INFO: Pod "nginx-deployment-7b8c6f4498-fl55h" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fl55h,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-fl55h,UID:385df94e-4074-4d57-907f-d710836e1f18,ResourceVersion:6996680,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8da67 0xc002e8da68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8dae0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8db00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.339: INFO: Pod "nginx-deployment-7b8c6f4498-gmfm6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gmfm6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-gmfm6,UID:8261b5fd-b955-4fc3-bff8-2b10d2d03a6c,ResourceVersion:6996573,Generation:0,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8db87 0xc002e8db88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8dc00} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8dc20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.33,StartTime:2020-04-23 13:10:05 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-23 13:10:13 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://2ff53e1987e62273eb4d096bc05e4cc28e2a19f9560a8d96d9b54ba96427adf9}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.339: INFO: Pod "nginx-deployment-7b8c6f4498-hsdcs" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hsdcs,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-hsdcs,UID:55a06bb7-825b-4fac-a147-3a1e6442728f,ResourceVersion:6996677,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8dcf7 0xc002e8dcf8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8dd70} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8dd90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.339: INFO: Pod "nginx-deployment-7b8c6f4498-jp657" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jp657,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-jp657,UID:b679516e-e0df-4cac-a24d-be7faaf25359,ResourceVersion:6996692,Generation:0,CreationTimestamp:2020-04-23 13:10:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8de17 0xc002e8de18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8de90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002e8deb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-23 13:10:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.339: INFO: Pod "nginx-deployment-7b8c6f4498-pdcch" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pdcch,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-pdcch,UID:7756e27d-a599-4dec-876c-3efd1e526f72,ResourceVersion:6996678,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc002e8df77 0xc002e8df78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002e8dff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b6a010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.340: INFO: Pod "nginx-deployment-7b8c6f4498-pn9jb" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pn9jb,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-pn9jb,UID:9c5e1b92-8395-4cf6-b49c-315d2c2c97d9,ResourceVersion:6996659,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc001b6a097 0xc001b6a098}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b6a110} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b6a130}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.340: INFO: Pod "nginx-deployment-7b8c6f4498-rkrp4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rkrp4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-rkrp4,UID:9eab4c7f-a378-4a91-bdff-5c58cd15abdf,ResourceVersion:6996546,Generation:0,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc001b6a1b7 0xc001b6a1b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b6a230} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b6a250}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:13 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:13 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.92,StartTime:2020-04-23 13:10:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-23 13:10:12 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://1954fcad15a4450b071be2c328970b0a1039b7ac163d7f4149a2e4e43bacb78e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.340: INFO: Pod "nginx-deployment-7b8c6f4498-rpzfl" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rpzfl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-rpzfl,UID:4fd8bfeb-cdbf-43da-b8dc-b86792be9bc0,ResourceVersion:6996688,Generation:0,CreationTimestamp:2020-04-23 13:10:16 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc001b6a337 0xc001b6a338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b6a3b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b6a3d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:16 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-23 13:10:17 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.340: INFO: Pod "nginx-deployment-7b8c6f4498-t5g7k" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-t5g7k,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-t5g7k,UID:a4e81570-8759-4bb4-bb22-99f9092ed1de,ResourceVersion:6996539,Generation:0,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc001b6a497 0xc001b6a498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b6a510} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b6a530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:12 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:12 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.31,StartTime:2020-04-23 13:10:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-23 13:10:11 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://754876d085d77a955ce800586f09de18c2ba313cc2e1b133398026c0179c4e8d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.340: INFO: Pod "nginx-deployment-7b8c6f4498-vmrx9" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vmrx9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-vmrx9,UID:13fcf194-c15e-4d2c-95cd-a058d53a5a42,ResourceVersion:6996513,Generation:0,CreationTimestamp:2020-04-23 13:10:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc001b6a607 0xc001b6a608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b6a680} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b6a6a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:04 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.91,StartTime:2020-04-23 13:10:04 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-23 13:10:08 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://e0e2ab974b4db39c7ad4d69948b5d6cc905f88161d26b3f525318ccdfcd8d793}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 23 13:10:17.340: INFO: Pod "nginx-deployment-7b8c6f4498-wjfk6" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wjfk6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9165,SelfLink:/api/v1/namespaces/deployment-9165/pods/nginx-deployment-7b8c6f4498-wjfk6,UID:07220cbd-5ad3-4cea-bae0-9e6c37bd192d,ResourceVersion:6996679,Generation:0,CreationTimestamp:2020-04-23 13:10:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 8f3e5ced-c954-4964-a7bc-4059143c22f2 0xc001b6a777 0xc001b6a778}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vgqfv {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vgqfv,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vgqfv true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b6a7f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b6a810}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:10:17 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:10:17.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9165" for this suite. 
Apr 23 13:10:35.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:10:35.635: INFO: namespace deployment-9165 deletion completed in 18.230500852s
• [SLOW TEST:30.931 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:10:35.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:10:40.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7948" for this suite.
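The adoption scenario logged above (an orphan pod whose labels match a newly created ReplicationController's selector) can be sketched with two objects like the following. This is a hypothetical reconstruction, not the test's actual manifests: the pod name `pod-adoption` comes from the log, but the image, command, and RC name are assumptions.

```yaml
# Sketch: an orphan pod plus an RC whose selector matches its label.
# Once the RC exists, its controller adopts the pod by adding an
# ownerReference instead of creating a new replica.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: busybox            # assumed image
    command: ["sleep", "3600"]
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption          # assumed RC name
spec:
  replicas: 1
  selector:
    name: pod-adoption        # matches the orphan pod's label
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: busybox        # assumed image
        command: ["sleep", "3600"]
```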
Apr 23 13:11:03.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:11:03.163: INFO: namespace replication-controller-7948 deletion completed in 22.20040862s
• [SLOW TEST:27.528 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:11:03.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 13:11:03.225: INFO: Waiting up to 5m0s for pod "downwardapi-volume-67ea00b3-b38d-458f-afa8-cc8c49f7e2fc" in namespace "downward-api-2628" to be "success or failure"
Apr 23 13:11:03.229: INFO: Pod "downwardapi-volume-67ea00b3-b38d-458f-afa8-cc8c49f7e2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 3.796984ms
Apr 23 13:11:05.234: INFO: Pod "downwardapi-volume-67ea00b3-b38d-458f-afa8-cc8c49f7e2fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008330612s
Apr 23 13:11:07.238: INFO: Pod "downwardapi-volume-67ea00b3-b38d-458f-afa8-cc8c49f7e2fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012351684s
STEP: Saw pod success
Apr 23 13:11:07.238: INFO: Pod "downwardapi-volume-67ea00b3-b38d-458f-afa8-cc8c49f7e2fc" satisfied condition "success or failure"
Apr 23 13:11:07.240: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-67ea00b3-b38d-458f-afa8-cc8c49f7e2fc container client-container:
STEP: delete the pod
Apr 23 13:11:07.291: INFO: Waiting for pod downwardapi-volume-67ea00b3-b38d-458f-afa8-cc8c49f7e2fc to disappear
Apr 23 13:11:07.350: INFO: Pod downwardapi-volume-67ea00b3-b38d-458f-afa8-cc8c49f7e2fc no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:11:07.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2628" for this suite.
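The downward-API volume pod exercised above looks roughly like the following sketch. The container name `client-container` matches the log; the pod name, image, and command are illustrative assumptions (the framework generates names such as downwardapi-volume-67ea00b3-…).

```yaml
# Sketch: a downwardAPI volume projects metadata.name into a file,
# which the container prints so the framework can verify the pod name.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
```

The pod runs to completion and its log is compared against the expected name, which is why the test waits for the "success or failure" (Succeeded/Failed) condition rather than Ready.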
Apr 23 13:11:13.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:11:13.449: INFO: namespace downward-api-2628 deletion completed in 6.095181919s
• [SLOW TEST:10.284 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:11:13.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Apr 23 13:11:13.537: INFO: Pod name pod-release: Found 0 pods out of 1
Apr 23 13:11:18.542: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:11:19.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7384" for this suite.
Apr 23 13:11:25.678: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:11:25.749: INFO: namespace replication-controller-7384 deletion completed in 6.17390585s
• [SLOW TEST:12.299 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:11:25.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-62d2635c-2160-4f0f-8134-1798a324dd61
Apr 23 13:11:25.936: INFO: Pod name my-hostname-basic-62d2635c-2160-4f0f-8134-1798a324dd61: Found 0 pods out of 1
Apr 23 13:11:30.941: INFO: Pod name my-hostname-basic-62d2635c-2160-4f0f-8134-1798a324dd61: Found 1 pods out of 1
Apr 23 13:11:30.941: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-62d2635c-2160-4f0f-8134-1798a324dd61" are running
Apr 23 13:11:30.944: INFO: Pod "my-hostname-basic-62d2635c-2160-4f0f-8134-1798a324dd61-6x7mb" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 13:11:25 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 13:11:29 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 13:11:29 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-23 13:11:25 +0000 UTC Reason: Message:}])
Apr 23 13:11:30.944: INFO: Trying to dial the pod
Apr 23 13:11:35.955: INFO: Controller my-hostname-basic-62d2635c-2160-4f0f-8134-1798a324dd61: Got expected result from replica 1 [my-hostname-basic-62d2635c-2160-4f0f-8134-1798a324dd61-6x7mb]: "my-hostname-basic-62d2635c-2160-4f0f-8134-1798a324dd61-6x7mb", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:11:35.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7364" for this suite.
Apr 23 13:11:41.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:11:42.068: INFO: namespace replication-controller-7364 deletion completed in 6.109457163s
• [SLOW TEST:16.318 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:11:42.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 23 13:11:42.106: INFO: Waiting up to 5m0s for pod "downward-api-b0ac4de6-297b-4208-8848-7e9dcfec1957" in namespace "downward-api-8750" to be "success or failure"
Apr 23 13:11:42.123: INFO: Pod "downward-api-b0ac4de6-297b-4208-8848-7e9dcfec1957": Phase="Pending", Reason="", readiness=false. Elapsed: 16.559839ms
Apr 23 13:11:44.127: INFO: Pod "downward-api-b0ac4de6-297b-4208-8848-7e9dcfec1957": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020467788s
Apr 23 13:11:46.131: INFO: Pod "downward-api-b0ac4de6-297b-4208-8848-7e9dcfec1957": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024638684s
STEP: Saw pod success
Apr 23 13:11:46.131: INFO: Pod "downward-api-b0ac4de6-297b-4208-8848-7e9dcfec1957" satisfied condition "success or failure"
Apr 23 13:11:46.134: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b0ac4de6-297b-4208-8848-7e9dcfec1957 container dapi-container:
STEP: delete the pod
Apr 23 13:11:46.293: INFO: Waiting for pod downward-api-b0ac4de6-297b-4208-8848-7e9dcfec1957 to disappear
Apr 23 13:11:46.310: INFO: Pod downward-api-b0ac4de6-297b-4208-8848-7e9dcfec1957 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:11:46.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8750" for this suite.
Apr 23 13:11:52.331: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:11:52.402: INFO: namespace downward-api-8750 deletion completed in 6.089048374s
• [SLOW TEST:10.334 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:11:52.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-45dc6c88-fb2a-49f2-aa43-ce8f7eaa1ab7
STEP: Creating a pod to test consume configMaps
Apr 23 13:11:52.487: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-61d7a09c-673e-4707-86cc-f283318d9753" in namespace "projected-3271" to be "success or failure"
Apr 23 13:11:52.490: INFO: Pod "pod-projected-configmaps-61d7a09c-673e-4707-86cc-f283318d9753": Phase="Pending", Reason="", readiness=false. Elapsed: 3.055862ms
Apr 23 13:11:54.494: INFO: Pod "pod-projected-configmaps-61d7a09c-673e-4707-86cc-f283318d9753": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007350356s
Apr 23 13:11:56.499: INFO: Pod "pod-projected-configmaps-61d7a09c-673e-4707-86cc-f283318d9753": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011881864s
STEP: Saw pod success
Apr 23 13:11:56.499: INFO: Pod "pod-projected-configmaps-61d7a09c-673e-4707-86cc-f283318d9753" satisfied condition "success or failure"
Apr 23 13:11:56.502: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-61d7a09c-673e-4707-86cc-f283318d9753 container projected-configmap-volume-test:
STEP: delete the pod
Apr 23 13:11:56.653: INFO: Waiting for pod pod-projected-configmaps-61d7a09c-673e-4707-86cc-f283318d9753 to disappear
Apr 23 13:11:56.748: INFO: Pod pod-projected-configmaps-61d7a09c-673e-4707-86cc-f283318d9753 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:11:56.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3271" for this suite.
Apr 23 13:12:02.868: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:12:02.948: INFO: namespace projected-3271 deletion completed in 6.108906348s
• [SLOW TEST:10.546 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:12:02.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-31ba5251-bf6a-43d2-af1a-8513c33d5d0c
STEP: Creating a pod to test consume secrets
Apr 23 13:12:03.040: INFO: Waiting up to 5m0s for pod "pod-secrets-58d0d62e-c30e-4715-856c-a830dd35960e" in namespace "secrets-4266" to be "success or failure"
Apr 23 13:12:03.045: INFO: Pod "pod-secrets-58d0d62e-c30e-4715-856c-a830dd35960e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.610713ms
Apr 23 13:12:05.048: INFO: Pod "pod-secrets-58d0d62e-c30e-4715-856c-a830dd35960e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008289834s
Apr 23 13:12:07.053: INFO: Pod "pod-secrets-58d0d62e-c30e-4715-856c-a830dd35960e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013009414s
STEP: Saw pod success
Apr 23 13:12:07.053: INFO: Pod "pod-secrets-58d0d62e-c30e-4715-856c-a830dd35960e" satisfied condition "success or failure"
Apr 23 13:12:07.056: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-58d0d62e-c30e-4715-856c-a830dd35960e container secret-volume-test:
STEP: delete the pod
Apr 23 13:12:07.070: INFO: Waiting for pod pod-secrets-58d0d62e-c30e-4715-856c-a830dd35960e to disappear
Apr 23 13:12:07.093: INFO: Pod pod-secrets-58d0d62e-c30e-4715-856c-a830dd35960e no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:12:07.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4266" for this suite.
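The secrets-volume test above exercises objects roughly like the following sketch. The container name `secret-volume-test` matches the log; the Secret/pod names, key, image, and command are illustrative assumptions (the framework generates names such as secret-test-31ba5251-…).

```yaml
# Sketch: a Secret mounted as a read-only volume, read back by the pod
# so the framework can verify the decoded contents.
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-example          # assumed name
stringData:
  data-1: value-1                    # assumed key/value
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example          # assumed name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox                   # assumed image
    command: ["sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-example
```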
Apr 23 13:12:13.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:12:13.184: INFO: namespace secrets-4266 deletion completed in 6.087658451s
• [SLOW TEST:10.235 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:12:13.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 23 13:12:13.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7599'
Apr 23 13:12:15.622: INFO: stderr: ""
Apr 23 13:12:15.622: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 23 13:12:15.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7599'
Apr 23 13:12:15.723: INFO: stderr: ""
Apr 23 13:12:15.723: INFO: stdout: "update-demo-nautilus-4tj7x update-demo-nautilus-vc665 "
Apr 23 13:12:15.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:15.815: INFO: stderr: ""
Apr 23 13:12:15.815: INFO: stdout: ""
Apr 23 13:12:15.815: INFO: update-demo-nautilus-4tj7x is created but not running
Apr 23 13:12:20.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7599'
Apr 23 13:12:20.916: INFO: stderr: ""
Apr 23 13:12:20.916: INFO: stdout: "update-demo-nautilus-4tj7x update-demo-nautilus-vc665 "
Apr 23 13:12:20.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:21.003: INFO: stderr: ""
Apr 23 13:12:21.003: INFO: stdout: "true"
Apr 23 13:12:21.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:21.087: INFO: stderr: ""
Apr 23 13:12:21.087: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 23 13:12:21.087: INFO: validating pod update-demo-nautilus-4tj7x
Apr 23 13:12:21.091: INFO: got data: { "image": "nautilus.jpg" }
Apr 23 13:12:21.091: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 23 13:12:21.091: INFO: update-demo-nautilus-4tj7x is verified up and running
Apr 23 13:12:21.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc665 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:21.181: INFO: stderr: ""
Apr 23 13:12:21.181: INFO: stdout: "true"
Apr 23 13:12:21.181: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vc665 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:21.267: INFO: stderr: ""
Apr 23 13:12:21.267: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 23 13:12:21.267: INFO: validating pod update-demo-nautilus-vc665
Apr 23 13:12:21.270: INFO: got data: { "image": "nautilus.jpg" }
Apr 23 13:12:21.270: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 23 13:12:21.270: INFO: update-demo-nautilus-vc665 is verified up and running
STEP: scaling down the replication controller
Apr 23 13:12:21.272: INFO: scanned /root for discovery docs:
Apr 23 13:12:21.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-7599'
Apr 23 13:12:22.442: INFO: stderr: ""
Apr 23 13:12:22.442: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 23 13:12:22.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7599'
Apr 23 13:12:22.540: INFO: stderr: ""
Apr 23 13:12:22.540: INFO: stdout: "update-demo-nautilus-4tj7x update-demo-nautilus-vc665 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 23 13:12:27.540: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7599'
Apr 23 13:12:27.644: INFO: stderr: ""
Apr 23 13:12:27.644: INFO: stdout: "update-demo-nautilus-4tj7x update-demo-nautilus-vc665 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 23 13:12:32.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7599'
Apr 23 13:12:32.749: INFO: stderr: ""
Apr 23 13:12:32.749: INFO: stdout: "update-demo-nautilus-4tj7x "
Apr 23 13:12:32.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:32.844: INFO: stderr: ""
Apr 23 13:12:32.844: INFO: stdout: "true"
Apr 23 13:12:32.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:32.943: INFO: stderr: ""
Apr 23 13:12:32.943: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 23 13:12:32.943: INFO: validating pod update-demo-nautilus-4tj7x
Apr 23 13:12:32.946: INFO: got data: { "image": "nautilus.jpg" }
Apr 23 13:12:32.946: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 23 13:12:32.946: INFO: update-demo-nautilus-4tj7x is verified up and running
STEP: scaling up the replication controller
Apr 23 13:12:32.948: INFO: scanned /root for discovery docs:
Apr 23 13:12:32.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-7599'
Apr 23 13:12:34.111: INFO: stderr: ""
Apr 23 13:12:34.112: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 23 13:12:34.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7599'
Apr 23 13:12:34.207: INFO: stderr: ""
Apr 23 13:12:34.207: INFO: stdout: "update-demo-nautilus-4tj7x update-demo-nautilus-8qxxb "
Apr 23 13:12:34.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:34.298: INFO: stderr: ""
Apr 23 13:12:34.298: INFO: stdout: "true"
Apr 23 13:12:34.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:34.396: INFO: stderr: ""
Apr 23 13:12:34.396: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 23 13:12:34.396: INFO: validating pod update-demo-nautilus-4tj7x
Apr 23 13:12:34.399: INFO: got data: { "image": "nautilus.jpg" }
Apr 23 13:12:34.399: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 23 13:12:34.399: INFO: update-demo-nautilus-4tj7x is verified up and running
Apr 23 13:12:34.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qxxb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:34.487: INFO: stderr: ""
Apr 23 13:12:34.487: INFO: stdout: ""
Apr 23 13:12:34.487: INFO: update-demo-nautilus-8qxxb is created but not running
Apr 23 13:12:39.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7599'
Apr 23 13:12:39.580: INFO: stderr: ""
Apr 23 13:12:39.581: INFO: stdout: "update-demo-nautilus-4tj7x update-demo-nautilus-8qxxb "
Apr 23 13:12:39.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:39.674: INFO: stderr: ""
Apr 23 13:12:39.674: INFO: stdout: "true"
Apr 23 13:12:39.674: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4tj7x -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:39.760: INFO: stderr: ""
Apr 23 13:12:39.760: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 23 13:12:39.760: INFO: validating pod update-demo-nautilus-4tj7x
Apr 23 13:12:39.764: INFO: got data: { "image": "nautilus.jpg" }
Apr 23 13:12:39.764: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 23 13:12:39.764: INFO: update-demo-nautilus-4tj7x is verified up and running
Apr 23 13:12:39.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qxxb -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7599'
Apr 23 13:12:39.864: INFO: stderr: ""
Apr 23 13:12:39.864: INFO: stdout: "true"
Apr 23 13:12:39.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-8qxxb -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7599' Apr 23 13:12:39.964: INFO: stderr: "" Apr 23 13:12:39.964: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 23 13:12:39.964: INFO: validating pod update-demo-nautilus-8qxxb Apr 23 13:12:39.967: INFO: got data: { "image": "nautilus.jpg" } Apr 23 13:12:39.967: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 23 13:12:39.967: INFO: update-demo-nautilus-8qxxb is verified up and running STEP: using delete to clean up resources Apr 23 13:12:39.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7599' Apr 23 13:12:40.070: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 13:12:40.070: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 23 13:12:40.071: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-7599' Apr 23 13:12:40.173: INFO: stderr: "No resources found.\n" Apr 23 13:12:40.173: INFO: stdout: "" Apr 23 13:12:40.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-7599 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 23 13:12:40.299: INFO: stderr: "" Apr 23 13:12:40.299: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:12:40.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7599" for this suite. 
Apr 23 13:13:02.333: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:13:02.414: INFO: namespace kubectl-7599 deletion completed in 22.095595773s • [SLOW TEST:49.230 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:13:02.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-00565f1c-1ec0-472b-ac70-f7c0bd08d6dc STEP: Creating a pod to test consume configMaps Apr 23 13:13:02.478: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-c1fe859b-ec54-4ed0-8bd6-4ba87c231931" in namespace "projected-3067" to be "success or failure" Apr 23 13:13:02.526: INFO: Pod "pod-projected-configmaps-c1fe859b-ec54-4ed0-8bd6-4ba87c231931": Phase="Pending", Reason="", readiness=false. 
Elapsed: 47.386183ms Apr 23 13:13:04.530: INFO: Pod "pod-projected-configmaps-c1fe859b-ec54-4ed0-8bd6-4ba87c231931": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051749815s Apr 23 13:13:06.534: INFO: Pod "pod-projected-configmaps-c1fe859b-ec54-4ed0-8bd6-4ba87c231931": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056169778s STEP: Saw pod success Apr 23 13:13:06.534: INFO: Pod "pod-projected-configmaps-c1fe859b-ec54-4ed0-8bd6-4ba87c231931" satisfied condition "success or failure" Apr 23 13:13:06.538: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-c1fe859b-ec54-4ed0-8bd6-4ba87c231931 container projected-configmap-volume-test: STEP: delete the pod Apr 23 13:13:06.599: INFO: Waiting for pod pod-projected-configmaps-c1fe859b-ec54-4ed0-8bd6-4ba87c231931 to disappear Apr 23 13:13:06.606: INFO: Pod pod-projected-configmaps-c1fe859b-ec54-4ed0-8bd6-4ba87c231931 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:13:06.606: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3067" for this suite. 
Apr 23 13:13:12.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:13:12.715: INFO: namespace projected-3067 deletion completed in 6.106035545s • [SLOW TEST:10.301 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:13:12.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on tmpfs Apr 23 13:13:12.769: INFO: Waiting up to 5m0s for pod "pod-97a8eb51-fb65-461d-8ace-32ff1e48260d" in namespace "emptydir-9748" to be "success or failure" Apr 23 13:13:12.773: INFO: Pod "pod-97a8eb51-fb65-461d-8ace-32ff1e48260d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.605679ms Apr 23 13:13:14.777: INFO: Pod "pod-97a8eb51-fb65-461d-8ace-32ff1e48260d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007669154s Apr 23 13:13:16.781: INFO: Pod "pod-97a8eb51-fb65-461d-8ace-32ff1e48260d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01168715s STEP: Saw pod success Apr 23 13:13:16.781: INFO: Pod "pod-97a8eb51-fb65-461d-8ace-32ff1e48260d" satisfied condition "success or failure" Apr 23 13:13:16.784: INFO: Trying to get logs from node iruya-worker pod pod-97a8eb51-fb65-461d-8ace-32ff1e48260d container test-container: STEP: delete the pod Apr 23 13:13:16.822: INFO: Waiting for pod pod-97a8eb51-fb65-461d-8ace-32ff1e48260d to disappear Apr 23 13:13:16.833: INFO: Pod pod-97a8eb51-fb65-461d-8ace-32ff1e48260d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:13:16.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9748" for this suite. Apr 23 13:13:22.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:13:22.942: INFO: namespace emptydir-9748 deletion completed in 6.105613603s • [SLOW TEST:10.227 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:13:22.943: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 13:13:23.375: INFO: Create a RollingUpdate DaemonSet Apr 23 13:13:23.378: INFO: Check that daemon pods launch on every node of the cluster Apr 23 13:13:23.391: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 13:13:23.396: INFO: Number of nodes with available pods: 0 Apr 23 13:13:23.396: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:13:24.401: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 13:13:24.405: INFO: Number of nodes with available pods: 0 Apr 23 13:13:24.405: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:13:25.485: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 13:13:25.560: INFO: Number of nodes with available pods: 0 Apr 23 13:13:25.560: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:13:26.406: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 13:13:26.409: INFO: Number of nodes with available pods: 0 Apr 23 13:13:26.409: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:13:27.406: INFO: DaemonSet 
pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 13:13:27.409: INFO: Number of nodes with available pods: 2 Apr 23 13:13:27.409: INFO: Number of running nodes: 2, number of available pods: 2 Apr 23 13:13:27.409: INFO: Update the DaemonSet to trigger a rollout Apr 23 13:13:27.416: INFO: Updating DaemonSet daemon-set Apr 23 13:13:42.435: INFO: Roll back the DaemonSet before rollout is complete Apr 23 13:13:42.442: INFO: Updating DaemonSet daemon-set Apr 23 13:13:42.442: INFO: Make sure DaemonSet rollback is complete Apr 23 13:13:42.448: INFO: Wrong image for pod: daemon-set-2w2h5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 23 13:13:42.448: INFO: Pod daemon-set-2w2h5 is not available Apr 23 13:13:42.454: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 13:13:43.458: INFO: Wrong image for pod: daemon-set-2w2h5. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Apr 23 13:13:43.458: INFO: Pod daemon-set-2w2h5 is not available Apr 23 13:13:43.462: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 13:13:44.497: INFO: Pod daemon-set-wqxz5 is not available Apr 23 13:13:44.501: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6118, will wait for the garbage collector to delete the pods Apr 23 13:13:44.567: INFO: Deleting DaemonSet.extensions daemon-set took: 7.25823ms Apr 23 13:13:44.867: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.21363ms Apr 23 13:13:52.274: INFO: Number of nodes with available pods: 0 Apr 23 13:13:52.274: INFO: Number of running nodes: 0, number of available pods: 0 Apr 23 13:13:52.278: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6118/daemonsets","resourceVersion":"6997749"},"items":null} Apr 23 13:13:52.280: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6118/pods","resourceVersion":"6997749"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:13:52.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6118" for this suite. 
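The rollback verification above flags any pod whose image does not match the expected `docker.io/library/nginx:1.14-alpine` — pod daemon-set-2w2h5 was still running `foo:non-existent` until it was replaced. A minimal sketch of that per-pod image comparison, with illustrative function and variable names:

```go
package main

import (
	"fmt"
	"sort"
)

// wrongImagePods returns the names of pods whose container image differs
// from the expected image, as the "Wrong image for pod" log lines report.
func wrongImagePods(podImages map[string]string, expected string) []string {
	var wrong []string
	for name, image := range podImages {
		if image != expected {
			wrong = append(wrong, name)
		}
	}
	sort.Strings(wrong) // deterministic order for reporting
	return wrong
}

func main() {
	pods := map[string]string{
		"daemon-set-2w2h5": "foo:non-existent",                    // not yet rolled back
		"daemon-set-wqxz5": "docker.io/library/nginx:1.14-alpine", // rolled back
	}
	fmt.Println(wrongImagePods(pods, "docker.io/library/nginx:1.14-alpine"))
}
```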
Apr 23 13:13:58.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:13:58.390: INFO: namespace daemonsets-6118 deletion completed in 6.097239611s • [SLOW TEST:35.447 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:13:58.391: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 23 13:13:58.440: INFO: namespace kubectl-5771 Apr 23 13:13:58.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5771' Apr 23 13:13:58.730: INFO: stderr: "" Apr 23 13:13:58.730: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. 
Apr 23 13:13:59.735: INFO: Selector matched 1 pods for map[app:redis] Apr 23 13:13:59.735: INFO: Found 0 / 1 Apr 23 13:14:00.735: INFO: Selector matched 1 pods for map[app:redis] Apr 23 13:14:00.735: INFO: Found 0 / 1 Apr 23 13:14:01.735: INFO: Selector matched 1 pods for map[app:redis] Apr 23 13:14:01.735: INFO: Found 1 / 1 Apr 23 13:14:01.735: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 23 13:14:01.738: INFO: Selector matched 1 pods for map[app:redis] Apr 23 13:14:01.738: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 23 13:14:01.738: INFO: wait on redis-master startup in kubectl-5771 Apr 23 13:14:01.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-c5k6l redis-master --namespace=kubectl-5771' Apr 23 13:14:01.848: INFO: stderr: "" Apr 23 13:14:01.848: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Apr 13:14:01.094 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Apr 13:14:01.094 # Server started, Redis version 3.2.12\n1:M 23 Apr 13:14:01.094 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 23 Apr 13:14:01.094 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC Apr 23 13:14:01.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-5771' Apr 23 13:14:02.001: INFO: stderr: "" Apr 23 13:14:02.001: INFO: stdout: "service/rm2 exposed\n" Apr 23 13:14:02.008: INFO: Service rm2 in namespace kubectl-5771 found. STEP: exposing service Apr 23 13:14:04.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-5771' Apr 23 13:14:04.150: INFO: stderr: "" Apr 23 13:14:04.150: INFO: stdout: "service/rm3 exposed\n" Apr 23 13:14:04.166: INFO: Service rm3 in namespace kubectl-5771 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:14:06.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5771" for this suite. 
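The `kubectl expose` steps above create a chain of services: rm2 exposes the redis-master RC on port 1234, then rm3 re-exposes service rm2 on port 2345, and both explicitly target container port 6379. A small sketch modeling that port mapping (the `service` type here is illustrative, not the Kubernetes API type):

```go
package main

import "fmt"

// service models the port mapping created by `kubectl expose`.
type service struct {
	name       string
	port       int // port the service listens on
	targetPort int // port on the backing pods
}

func main() {
	// rm2 exposes the redis-master RC; rm3 re-exposes service rm2.
	// Both were created with --target-port=6379, the redis container port.
	rm2 := service{name: "rm2", port: 1234, targetPort: 6379}
	rm3 := service{name: "rm3", port: 2345, targetPort: 6379}
	for _, s := range []service{rm2, rm3} {
		fmt.Printf("%s: %d -> %d\n", s.name, s.port, s.targetPort)
	}
}
```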
Apr 23 13:14:28.194: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:14:28.271: INFO: namespace kubectl-5771 deletion completed in 22.088150769s • [SLOW TEST:29.880 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:14:28.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-151 I0423 13:14:28.339234 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-151, replica count: 1 I0423 13:14:29.389673 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 13:14:30.389891 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 13:14:31.390130 6 runners.go:180] 
svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0423 13:14:32.390339 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 23 13:14:32.533: INFO: Created: latency-svc-5l7g6 Apr 23 13:14:32.549: INFO: Got endpoints: latency-svc-5l7g6 [58.503747ms] Apr 23 13:14:32.588: INFO: Created: latency-svc-5kr2q Apr 23 13:14:32.635: INFO: Got endpoints: latency-svc-5kr2q [85.640826ms] Apr 23 13:14:32.637: INFO: Created: latency-svc-8lqnb Apr 23 13:14:32.651: INFO: Got endpoints: latency-svc-8lqnb [102.068168ms] Apr 23 13:14:32.677: INFO: Created: latency-svc-mnlwd Apr 23 13:14:32.693: INFO: Got endpoints: latency-svc-mnlwd [144.385099ms] Apr 23 13:14:32.719: INFO: Created: latency-svc-sdkfd Apr 23 13:14:32.760: INFO: Got endpoints: latency-svc-sdkfd [211.136453ms] Apr 23 13:14:32.773: INFO: Created: latency-svc-mh8vw Apr 23 13:14:32.790: INFO: Got endpoints: latency-svc-mh8vw [240.663075ms] Apr 23 13:14:32.816: INFO: Created: latency-svc-k47l2 Apr 23 13:14:32.832: INFO: Got endpoints: latency-svc-k47l2 [282.8179ms] Apr 23 13:14:32.910: INFO: Created: latency-svc-skw7v Apr 23 13:14:32.916: INFO: Got endpoints: latency-svc-skw7v [367.23828ms] Apr 23 13:14:32.940: INFO: Created: latency-svc-fgcxs Apr 23 13:14:32.959: INFO: Got endpoints: latency-svc-fgcxs [409.403006ms] Apr 23 13:14:32.983: INFO: Created: latency-svc-vm96w Apr 23 13:14:33.001: INFO: Got endpoints: latency-svc-vm96w [451.73315ms] Apr 23 13:14:33.084: INFO: Created: latency-svc-zs5zj Apr 23 13:14:33.092: INFO: Got endpoints: latency-svc-zs5zj [542.852074ms] Apr 23 13:14:33.110: INFO: Created: latency-svc-q5mkp Apr 23 13:14:33.140: INFO: Got endpoints: latency-svc-q5mkp [590.285909ms] Apr 23 13:14:33.168: INFO: Created: latency-svc-dh28f Apr 23 13:14:33.230: INFO: Got endpoints: latency-svc-dh28f [680.667073ms] Apr 23 13:14:33.265: 
INFO: Created: latency-svc-994pw Apr 23 13:14:33.278: INFO: Got endpoints: latency-svc-994pw [728.935698ms] Apr 23 13:14:33.296: INFO: Created: latency-svc-zfbz7 Apr 23 13:14:33.314: INFO: Got endpoints: latency-svc-zfbz7 [765.000196ms] Apr 23 13:14:33.359: INFO: Created: latency-svc-8dgk7 Apr 23 13:14:33.368: INFO: Got endpoints: latency-svc-8dgk7 [818.409527ms] Apr 23 13:14:33.391: INFO: Created: latency-svc-djx7n Apr 23 13:14:33.404: INFO: Got endpoints: latency-svc-djx7n [769.394225ms] Apr 23 13:14:33.426: INFO: Created: latency-svc-j7v45 Apr 23 13:14:33.440: INFO: Got endpoints: latency-svc-j7v45 [789.068181ms] Apr 23 13:14:33.511: INFO: Created: latency-svc-cblvb Apr 23 13:14:33.511: INFO: Got endpoints: latency-svc-cblvb [817.872656ms] Apr 23 13:14:33.542: INFO: Created: latency-svc-lqtng Apr 23 13:14:33.555: INFO: Got endpoints: latency-svc-lqtng [794.531767ms] Apr 23 13:14:33.578: INFO: Created: latency-svc-d8wzp Apr 23 13:14:33.591: INFO: Got endpoints: latency-svc-d8wzp [801.536168ms] Apr 23 13:14:33.643: INFO: Created: latency-svc-sss4s Apr 23 13:14:33.676: INFO: Got endpoints: latency-svc-sss4s [843.659031ms] Apr 23 13:14:33.697: INFO: Created: latency-svc-4hsm2 Apr 23 13:14:33.708: INFO: Got endpoints: latency-svc-4hsm2 [791.921399ms] Apr 23 13:14:33.733: INFO: Created: latency-svc-xxxtt Apr 23 13:14:33.766: INFO: Got endpoints: latency-svc-xxxtt [807.62616ms] Apr 23 13:14:33.775: INFO: Created: latency-svc-xz7v8 Apr 23 13:14:33.793: INFO: Got endpoints: latency-svc-xz7v8 [791.581491ms] Apr 23 13:14:33.811: INFO: Created: latency-svc-q99m4 Apr 23 13:14:33.829: INFO: Got endpoints: latency-svc-q99m4 [736.967397ms] Apr 23 13:14:33.846: INFO: Created: latency-svc-xp9pz Apr 23 13:14:33.859: INFO: Got endpoints: latency-svc-xp9pz [719.514679ms] Apr 23 13:14:33.907: INFO: Created: latency-svc-htz5w Apr 23 13:14:33.919: INFO: Got endpoints: latency-svc-htz5w [689.129721ms] Apr 23 13:14:33.943: INFO: Created: latency-svc-7glrj Apr 23 13:14:33.955: INFO: Got 
endpoints: latency-svc-7glrj [677.181964ms]
Apr 23 13:14:33.998: INFO: Created: latency-svc-brhmz
Apr 23 13:14:34.111: INFO: Created: latency-svc-wz4t2
Apr 23 13:14:34.118: INFO: Got endpoints: latency-svc-wz4t2 [749.968312ms]
Apr 23 13:14:34.118: INFO: Got endpoints: latency-svc-brhmz [803.724174ms]
Apr 23 13:14:34.159: INFO: Created: latency-svc-d8tp7
Apr 23 13:14:34.184: INFO: Got endpoints: latency-svc-d8tp7 [779.908872ms]
Apr 23 13:14:34.276: INFO: Created: latency-svc-n2lnb
Apr 23 13:14:34.280: INFO: Got endpoints: latency-svc-n2lnb [839.778453ms]
Apr 23 13:14:34.316: INFO: Created: latency-svc-t6tr8
Apr 23 13:14:34.328: INFO: Got endpoints: latency-svc-t6tr8 [816.936723ms]
Apr 23 13:14:34.352: INFO: Created: latency-svc-9p55x
Apr 23 13:14:34.365: INFO: Got endpoints: latency-svc-9p55x [809.660253ms]
Apr 23 13:14:34.419: INFO: Created: latency-svc-n4zwk
Apr 23 13:14:34.423: INFO: Got endpoints: latency-svc-n4zwk [831.546802ms]
Apr 23 13:14:34.465: INFO: Created: latency-svc-s9ppk
Apr 23 13:14:34.479: INFO: Got endpoints: latency-svc-s9ppk [803.21293ms]
Apr 23 13:14:34.501: INFO: Created: latency-svc-rdzst
Apr 23 13:14:34.509: INFO: Got endpoints: latency-svc-rdzst [800.969724ms]
Apr 23 13:14:34.557: INFO: Created: latency-svc-g77bp
Apr 23 13:14:34.574: INFO: Got endpoints: latency-svc-g77bp [808.028476ms]
Apr 23 13:14:34.616: INFO: Created: latency-svc-xhwfj
Apr 23 13:14:34.630: INFO: Got endpoints: latency-svc-xhwfj [837.49776ms]
Apr 23 13:14:34.657: INFO: Created: latency-svc-xtlsf
Apr 23 13:14:34.700: INFO: Got endpoints: latency-svc-xtlsf [871.316643ms]
Apr 23 13:14:34.716: INFO: Created: latency-svc-ddcs8
Apr 23 13:14:34.726: INFO: Got endpoints: latency-svc-ddcs8 [866.965879ms]
Apr 23 13:14:34.748: INFO: Created: latency-svc-z2kzr
Apr 23 13:14:34.763: INFO: Got endpoints: latency-svc-z2kzr [843.761985ms]
Apr 23 13:14:34.796: INFO: Created: latency-svc-pwvrv
Apr 23 13:14:34.862: INFO: Got endpoints: latency-svc-pwvrv [906.777195ms]
Apr 23 13:14:34.865: INFO: Created: latency-svc-ms29k
Apr 23 13:14:34.871: INFO: Got endpoints: latency-svc-ms29k [753.002265ms]
Apr 23 13:14:34.921: INFO: Created: latency-svc-h5tx9
Apr 23 13:14:34.925: INFO: Got endpoints: latency-svc-h5tx9 [807.635145ms]
Apr 23 13:14:34.953: INFO: Created: latency-svc-2kshh
Apr 23 13:14:34.961: INFO: Got endpoints: latency-svc-2kshh [777.370936ms]
Apr 23 13:14:35.035: INFO: Created: latency-svc-4mg85
Apr 23 13:14:35.041: INFO: Got endpoints: latency-svc-4mg85 [761.028592ms]
Apr 23 13:14:35.066: INFO: Created: latency-svc-qvxvx
Apr 23 13:14:35.076: INFO: Got endpoints: latency-svc-qvxvx [747.383171ms]
Apr 23 13:14:35.102: INFO: Created: latency-svc-5brsn
Apr 23 13:14:35.167: INFO: Got endpoints: latency-svc-5brsn [802.59396ms]
Apr 23 13:14:35.169: INFO: Created: latency-svc-nmqx4
Apr 23 13:14:35.184: INFO: Got endpoints: latency-svc-nmqx4 [761.39909ms]
Apr 23 13:14:35.203: INFO: Created: latency-svc-nmjp2
Apr 23 13:14:35.214: INFO: Got endpoints: latency-svc-nmjp2 [735.478123ms]
Apr 23 13:14:35.253: INFO: Created: latency-svc-2dk75
Apr 23 13:14:35.312: INFO: Got endpoints: latency-svc-2dk75 [802.021643ms]
Apr 23 13:14:35.318: INFO: Created: latency-svc-rgnc7
Apr 23 13:14:35.336: INFO: Got endpoints: latency-svc-rgnc7 [761.493268ms]
Apr 23 13:14:35.358: INFO: Created: latency-svc-q8qdl
Apr 23 13:14:35.383: INFO: Got endpoints: latency-svc-q8qdl [752.847769ms]
Apr 23 13:14:35.437: INFO: Created: latency-svc-h7t7w
Apr 23 13:14:35.440: INFO: Got endpoints: latency-svc-h7t7w [739.890593ms]
Apr 23 13:14:35.486: INFO: Created: latency-svc-v2ljs
Apr 23 13:14:35.505: INFO: Got endpoints: latency-svc-v2ljs [778.25135ms]
Apr 23 13:14:35.528: INFO: Created: latency-svc-gg6gv
Apr 23 13:14:35.604: INFO: Got endpoints: latency-svc-gg6gv [841.604014ms]
Apr 23 13:14:35.606: INFO: Created: latency-svc-xxprx
Apr 23 13:14:35.612: INFO: Got endpoints: latency-svc-xxprx [749.712598ms]
Apr 23 13:14:35.670: INFO: Created: latency-svc-7tw2z
Apr 23 13:14:35.685: INFO: Got endpoints: latency-svc-7tw2z [813.936497ms]
Apr 23 13:14:35.743: INFO: Created: latency-svc-gw6m4
Apr 23 13:14:35.745: INFO: Got endpoints: latency-svc-gw6m4 [819.984735ms]
Apr 23 13:14:35.774: INFO: Created: latency-svc-dsl7v
Apr 23 13:14:35.805: INFO: Got endpoints: latency-svc-dsl7v [843.416533ms]
Apr 23 13:14:35.832: INFO: Created: latency-svc-vxctj
Apr 23 13:14:35.842: INFO: Got endpoints: latency-svc-vxctj [800.494264ms]
Apr 23 13:14:35.893: INFO: Created: latency-svc-tvgd9
Apr 23 13:14:35.905: INFO: Got endpoints: latency-svc-tvgd9 [829.210171ms]
Apr 23 13:14:35.938: INFO: Created: latency-svc-qrb6f
Apr 23 13:14:35.950: INFO: Got endpoints: latency-svc-qrb6f [782.543132ms]
Apr 23 13:14:35.972: INFO: Created: latency-svc-xgwjv
Apr 23 13:14:35.986: INFO: Got endpoints: latency-svc-xgwjv [801.374848ms]
Apr 23 13:14:36.030: INFO: Created: latency-svc-p29hc
Apr 23 13:14:36.033: INFO: Got endpoints: latency-svc-p29hc [818.948153ms]
Apr 23 13:14:36.085: INFO: Created: latency-svc-j584g
Apr 23 13:14:36.094: INFO: Got endpoints: latency-svc-j584g [782.699351ms]
Apr 23 13:14:36.122: INFO: Created: latency-svc-cs949
Apr 23 13:14:36.161: INFO: Got endpoints: latency-svc-cs949 [825.337426ms]
Apr 23 13:14:36.175: INFO: Created: latency-svc-pvnqt
Apr 23 13:14:36.192: INFO: Got endpoints: latency-svc-pvnqt [808.791407ms]
Apr 23 13:14:36.222: INFO: Created: latency-svc-zmfch
Apr 23 13:14:36.237: INFO: Got endpoints: latency-svc-zmfch [796.267126ms]
Apr 23 13:14:36.259: INFO: Created: latency-svc-pvqkt
Apr 23 13:14:36.282: INFO: Got endpoints: latency-svc-pvqkt [776.993897ms]
Apr 23 13:14:36.301: INFO: Created: latency-svc-8nwln
Apr 23 13:14:36.311: INFO: Got endpoints: latency-svc-8nwln [706.970264ms]
Apr 23 13:14:36.338: INFO: Created: latency-svc-qqrdl
Apr 23 13:14:36.354: INFO: Got endpoints: latency-svc-qqrdl [742.243742ms]
Apr 23 13:14:36.379: INFO: Created: latency-svc-s67tt
Apr 23 13:14:36.413: INFO: Got endpoints: latency-svc-s67tt [728.349203ms]
Apr 23 13:14:36.428: INFO: Created: latency-svc-dvw6x
Apr 23 13:14:36.456: INFO: Got endpoints: latency-svc-dvw6x [710.591359ms]
Apr 23 13:14:36.486: INFO: Created: latency-svc-nts65
Apr 23 13:14:36.511: INFO: Got endpoints: latency-svc-nts65 [706.069229ms]
Apr 23 13:14:36.557: INFO: Created: latency-svc-vsj2p
Apr 23 13:14:36.571: INFO: Got endpoints: latency-svc-vsj2p [729.232609ms]
Apr 23 13:14:36.596: INFO: Created: latency-svc-vvl7t
Apr 23 13:14:36.614: INFO: Got endpoints: latency-svc-vvl7t [708.905018ms]
Apr 23 13:14:36.637: INFO: Created: latency-svc-5mj25
Apr 23 13:14:36.650: INFO: Got endpoints: latency-svc-5mj25 [700.149461ms]
Apr 23 13:14:36.707: INFO: Created: latency-svc-xnbhv
Apr 23 13:14:36.720: INFO: Got endpoints: latency-svc-xnbhv [734.401451ms]
Apr 23 13:14:36.752: INFO: Created: latency-svc-shh76
Apr 23 13:14:36.765: INFO: Got endpoints: latency-svc-shh76 [731.009831ms]
Apr 23 13:14:36.787: INFO: Created: latency-svc-v4f4g
Apr 23 13:14:36.800: INFO: Got endpoints: latency-svc-v4f4g [705.925243ms]
Apr 23 13:14:36.850: INFO: Created: latency-svc-pqrxp
Apr 23 13:14:36.888: INFO: Created: latency-svc-6d2g8
Apr 23 13:14:36.889: INFO: Got endpoints: latency-svc-pqrxp [727.08002ms]
Apr 23 13:14:36.912: INFO: Got endpoints: latency-svc-6d2g8 [720.434375ms]
Apr 23 13:14:37.016: INFO: Created: latency-svc-vw9sb
Apr 23 13:14:37.036: INFO: Got endpoints: latency-svc-vw9sb [799.23678ms]
Apr 23 13:14:37.058: INFO: Created: latency-svc-2mkl7
Apr 23 13:14:37.073: INFO: Got endpoints: latency-svc-2mkl7 [790.891094ms]
Apr 23 13:14:37.139: INFO: Created: latency-svc-lwhwh
Apr 23 13:14:37.145: INFO: Got endpoints: latency-svc-lwhwh [833.015117ms]
Apr 23 13:14:37.183: INFO: Created: latency-svc-grgz8
Apr 23 13:14:37.217: INFO: Got endpoints: latency-svc-grgz8 [862.303269ms]
Apr 23 13:14:37.323: INFO: Created: latency-svc-ch6rh
Apr 23 13:14:37.337: INFO: Got endpoints: latency-svc-ch6rh [923.971125ms]
Apr 23 13:14:37.366: INFO: Created: latency-svc-6srps
Apr 23 13:14:37.379: INFO: Got endpoints: latency-svc-6srps [923.038774ms]
Apr 23 13:14:37.399: INFO: Created: latency-svc-ltsrr
Apr 23 13:14:37.416: INFO: Got endpoints: latency-svc-ltsrr [904.629516ms]
Apr 23 13:14:37.464: INFO: Created: latency-svc-kl9jl
Apr 23 13:14:37.478: INFO: Got endpoints: latency-svc-kl9jl [906.694968ms]
Apr 23 13:14:37.519: INFO: Created: latency-svc-ngljp
Apr 23 13:14:37.536: INFO: Got endpoints: latency-svc-ngljp [921.906338ms]
Apr 23 13:14:37.561: INFO: Created: latency-svc-nqcnr
Apr 23 13:14:37.637: INFO: Got endpoints: latency-svc-nqcnr [986.57944ms]
Apr 23 13:14:37.643: INFO: Created: latency-svc-sljld
Apr 23 13:14:37.650: INFO: Got endpoints: latency-svc-sljld [929.867392ms]
Apr 23 13:14:37.670: INFO: Created: latency-svc-6pwkf
Apr 23 13:14:37.694: INFO: Got endpoints: latency-svc-6pwkf [929.373151ms]
Apr 23 13:14:37.724: INFO: Created: latency-svc-vsv2x
Apr 23 13:14:37.778: INFO: Got endpoints: latency-svc-vsv2x [977.555549ms]
Apr 23 13:14:37.779: INFO: Created: latency-svc-kzmdz
Apr 23 13:14:37.783: INFO: Got endpoints: latency-svc-kzmdz [894.530371ms]
Apr 23 13:14:37.807: INFO: Created: latency-svc-29896
Apr 23 13:14:37.820: INFO: Got endpoints: latency-svc-29896 [906.996882ms]
Apr 23 13:14:37.837: INFO: Created: latency-svc-ljxnq
Apr 23 13:14:37.863: INFO: Got endpoints: latency-svc-ljxnq [826.405213ms]
Apr 23 13:14:37.910: INFO: Created: latency-svc-9mwt5
Apr 23 13:14:37.913: INFO: Got endpoints: latency-svc-9mwt5 [840.474232ms]
Apr 23 13:14:37.935: INFO: Created: latency-svc-7m6z2
Apr 23 13:14:37.946: INFO: Got endpoints: latency-svc-7m6z2 [801.88957ms]
Apr 23 13:14:37.969: INFO: Created: latency-svc-4lr69
Apr 23 13:14:37.983: INFO: Got endpoints: latency-svc-4lr69 [766.184309ms]
Apr 23 13:14:38.005: INFO: Created: latency-svc-gntg6
Apr 23 13:14:38.060: INFO: Got endpoints: latency-svc-gntg6 [722.37509ms]
Apr 23 13:14:38.086: INFO: Created: latency-svc-2gjph
Apr 23 13:14:38.098: INFO: Got endpoints: latency-svc-2gjph [718.774352ms]
Apr 23 13:14:38.120: INFO: Created: latency-svc-pw28r
Apr 23 13:14:38.133: INFO: Got endpoints: latency-svc-pw28r [717.696492ms]
Apr 23 13:14:38.155: INFO: Created: latency-svc-d6vt6
Apr 23 13:14:38.215: INFO: Got endpoints: latency-svc-d6vt6 [737.707248ms]
Apr 23 13:14:38.219: INFO: Created: latency-svc-6dsxp
Apr 23 13:14:38.230: INFO: Got endpoints: latency-svc-6dsxp [694.128362ms]
Apr 23 13:14:38.258: INFO: Created: latency-svc-h9nb6
Apr 23 13:14:38.273: INFO: Got endpoints: latency-svc-h9nb6 [636.460108ms]
Apr 23 13:14:38.294: INFO: Created: latency-svc-nk84s
Apr 23 13:14:38.303: INFO: Got endpoints: latency-svc-nk84s [653.141126ms]
Apr 23 13:14:38.347: INFO: Created: latency-svc-wd5j6
Apr 23 13:14:38.358: INFO: Got endpoints: latency-svc-wd5j6 [663.663841ms]
Apr 23 13:14:38.389: INFO: Created: latency-svc-cmrls
Apr 23 13:14:38.405: INFO: Got endpoints: latency-svc-cmrls [627.490425ms]
Apr 23 13:14:38.427: INFO: Created: latency-svc-4khhj
Apr 23 13:14:38.444: INFO: Got endpoints: latency-svc-4khhj [661.035856ms]
Apr 23 13:14:38.498: INFO: Created: latency-svc-wfpxw
Apr 23 13:14:38.514: INFO: Got endpoints: latency-svc-wfpxw [694.529111ms]
Apr 23 13:14:38.545: INFO: Created: latency-svc-7vkc4
Apr 23 13:14:38.652: INFO: Got endpoints: latency-svc-7vkc4 [789.745225ms]
Apr 23 13:14:38.655: INFO: Created: latency-svc-lc8qb
Apr 23 13:14:38.658: INFO: Got endpoints: latency-svc-lc8qb [744.543599ms]
Apr 23 13:14:38.690: INFO: Created: latency-svc-jvl6l
Apr 23 13:14:38.701: INFO: Got endpoints: latency-svc-jvl6l [754.540616ms]
Apr 23 13:14:38.731: INFO: Created: latency-svc-pchkn
Apr 23 13:14:38.743: INFO: Got endpoints: latency-svc-pchkn [760.342338ms]
Apr 23 13:14:38.791: INFO: Created: latency-svc-n2dlt
Apr 23 13:14:38.794: INFO: Got endpoints: latency-svc-n2dlt [734.354852ms]
Apr 23 13:14:38.834: INFO: Created: latency-svc-j54jr
Apr 23 13:14:38.845: INFO: Got endpoints: latency-svc-j54jr [747.438898ms]
Apr 23 13:14:38.870: INFO: Created: latency-svc-cxmpb
Apr 23 13:14:38.882: INFO: Got endpoints: latency-svc-cxmpb [748.185452ms]
Apr 23 13:14:38.928: INFO: Created: latency-svc-nhqf5
Apr 23 13:14:38.936: INFO: Got endpoints: latency-svc-nhqf5 [720.4673ms]
Apr 23 13:14:38.971: INFO: Created: latency-svc-hrwvq
Apr 23 13:14:38.997: INFO: Got endpoints: latency-svc-hrwvq [766.569844ms]
Apr 23 13:14:39.019: INFO: Created: latency-svc-5gbg2
Apr 23 13:14:39.083: INFO: Got endpoints: latency-svc-5gbg2 [810.068048ms]
Apr 23 13:14:39.085: INFO: Created: latency-svc-lkhzt
Apr 23 13:14:39.093: INFO: Got endpoints: latency-svc-lkhzt [789.09275ms]
Apr 23 13:14:39.122: INFO: Created: latency-svc-tdf5x
Apr 23 13:14:39.135: INFO: Got endpoints: latency-svc-tdf5x [777.295401ms]
Apr 23 13:14:39.163: INFO: Created: latency-svc-g5mb4
Apr 23 13:14:39.171: INFO: Got endpoints: latency-svc-g5mb4 [765.70556ms]
Apr 23 13:14:39.229: INFO: Created: latency-svc-cf8tx
Apr 23 13:14:39.231: INFO: Got endpoints: latency-svc-cf8tx [786.383333ms]
Apr 23 13:14:39.303: INFO: Created: latency-svc-mgnk5
Apr 23 13:14:39.316: INFO: Got endpoints: latency-svc-mgnk5 [801.51702ms]
Apr 23 13:14:39.374: INFO: Created: latency-svc-6pw4l
Apr 23 13:14:39.419: INFO: Got endpoints: latency-svc-6pw4l [766.268851ms]
Apr 23 13:14:39.439: INFO: Created: latency-svc-wnswx
Apr 23 13:14:39.454: INFO: Got endpoints: latency-svc-wnswx [796.6407ms]
Apr 23 13:14:39.498: INFO: Created: latency-svc-sqghx
Apr 23 13:14:39.502: INFO: Got endpoints: latency-svc-sqghx [801.296174ms]
Apr 23 13:14:39.524: INFO: Created: latency-svc-prlrc
Apr 23 13:14:39.533: INFO: Got endpoints: latency-svc-prlrc [789.423086ms]
Apr 23 13:14:39.555: INFO: Created: latency-svc-mt7lq
Apr 23 13:14:39.563: INFO: Got endpoints: latency-svc-mt7lq [769.261813ms]
Apr 23 13:14:39.589: INFO: Created: latency-svc-5bvll
Apr 23 13:14:39.634: INFO: Got endpoints: latency-svc-5bvll [788.962221ms]
Apr 23 13:14:39.655: INFO: Created: latency-svc-46gsz
Apr 23 13:14:39.672: INFO: Got endpoints: latency-svc-46gsz [790.100292ms]
Apr 23 13:14:39.692: INFO: Created: latency-svc-4hb9d
Apr 23 13:14:39.708: INFO: Got endpoints: latency-svc-4hb9d [772.377578ms]
Apr 23 13:14:39.735: INFO: Created: latency-svc-22kwq
Apr 23 13:14:39.779: INFO: Got endpoints: latency-svc-22kwq [781.694763ms]
Apr 23 13:14:39.799: INFO: Created: latency-svc-ctxxg
Apr 23 13:14:39.811: INFO: Got endpoints: latency-svc-ctxxg [727.261818ms]
Apr 23 13:14:39.835: INFO: Created: latency-svc-zptkc
Apr 23 13:14:39.847: INFO: Got endpoints: latency-svc-zptkc [754.624761ms]
Apr 23 13:14:39.878: INFO: Created: latency-svc-2sbsf
Apr 23 13:14:39.928: INFO: Got endpoints: latency-svc-2sbsf [792.409265ms]
Apr 23 13:14:39.930: INFO: Created: latency-svc-jrtdb
Apr 23 13:14:39.950: INFO: Got endpoints: latency-svc-jrtdb [778.503714ms]
Apr 23 13:14:39.985: INFO: Created: latency-svc-brjg6
Apr 23 13:14:39.998: INFO: Got endpoints: latency-svc-brjg6 [767.435952ms]
Apr 23 13:14:40.021: INFO: Created: latency-svc-4s6m6
Apr 23 13:14:40.060: INFO: Got endpoints: latency-svc-4s6m6 [743.935443ms]
Apr 23 13:14:40.076: INFO: Created: latency-svc-cn7mv
Apr 23 13:14:40.088: INFO: Got endpoints: latency-svc-cn7mv [669.382244ms]
Apr 23 13:14:40.118: INFO: Created: latency-svc-m2srz
Apr 23 13:14:40.154: INFO: Got endpoints: latency-svc-m2srz [699.490331ms]
Apr 23 13:14:40.213: INFO: Created: latency-svc-2qwgv
Apr 23 13:14:40.230: INFO: Got endpoints: latency-svc-2qwgv [727.4935ms]
Apr 23 13:14:40.261: INFO: Created: latency-svc-x2g87
Apr 23 13:14:40.272: INFO: Got endpoints: latency-svc-x2g87 [739.222098ms]
Apr 23 13:14:40.292: INFO: Created: latency-svc-tvllp
Apr 23 13:14:40.341: INFO: Got endpoints: latency-svc-tvllp [777.884222ms]
Apr 23 13:14:40.358: INFO: Created: latency-svc-56pxt
Apr 23 13:14:40.375: INFO: Got endpoints: latency-svc-56pxt [740.243936ms]
Apr 23 13:14:40.394: INFO: Created: latency-svc-6c8f9
Apr 23 13:14:40.405: INFO: Got endpoints: latency-svc-6c8f9 [733.181759ms]
Apr 23 13:14:40.429: INFO: Created: latency-svc-js78j
Apr 23 13:14:40.490: INFO: Got endpoints: latency-svc-js78j [781.930502ms]
Apr 23 13:14:40.493: INFO: Created: latency-svc-88d7n
Apr 23 13:14:40.495: INFO: Got endpoints: latency-svc-88d7n [716.80633ms]
Apr 23 13:14:40.526: INFO: Created: latency-svc-stj7w
Apr 23 13:14:40.544: INFO: Got endpoints: latency-svc-stj7w [733.148484ms]
Apr 23 13:14:40.580: INFO: Created: latency-svc-sfzr2
Apr 23 13:14:40.628: INFO: Got endpoints: latency-svc-sfzr2 [781.034004ms]
Apr 23 13:14:40.651: INFO: Created: latency-svc-shj6d
Apr 23 13:14:40.664: INFO: Got endpoints: latency-svc-shj6d [736.427796ms]
Apr 23 13:14:40.687: INFO: Created: latency-svc-htdc4
Apr 23 13:14:40.702: INFO: Got endpoints: latency-svc-htdc4 [752.083797ms]
Apr 23 13:14:40.755: INFO: Created: latency-svc-29kl7
Apr 23 13:14:40.757: INFO: Got endpoints: latency-svc-29kl7 [759.151099ms]
Apr 23 13:14:40.784: INFO: Created: latency-svc-8z52z
Apr 23 13:14:40.797: INFO: Got endpoints: latency-svc-8z52z [737.748571ms]
Apr 23 13:14:40.819: INFO: Created: latency-svc-q7fld
Apr 23 13:14:40.833: INFO: Got endpoints: latency-svc-q7fld [745.269074ms]
Apr 23 13:14:40.880: INFO: Created: latency-svc-gkzgc
Apr 23 13:14:40.883: INFO: Got endpoints: latency-svc-gkzgc [729.060283ms]
Apr 23 13:14:40.909: INFO: Created: latency-svc-s46tv
Apr 23 13:14:40.918: INFO: Got endpoints: latency-svc-s46tv [687.969294ms]
Apr 23 13:14:40.961: INFO: Created: latency-svc-f9tw8
Apr 23 13:14:40.973: INFO: Got endpoints: latency-svc-f9tw8 [700.662826ms]
Apr 23 13:14:41.018: INFO: Created: latency-svc-kg9vb
Apr 23 13:14:41.021: INFO: Got endpoints: latency-svc-kg9vb [680.063668ms]
Apr 23 13:14:41.065: INFO: Created: latency-svc-hzjxq
Apr 23 13:14:41.075: INFO: Got endpoints: latency-svc-hzjxq [700.264893ms]
Apr 23 13:14:41.103: INFO: Created: latency-svc-q5v7t
Apr 23 13:14:41.149: INFO: Got endpoints: latency-svc-q5v7t [744.275969ms]
Apr 23 13:14:41.151: INFO: Created: latency-svc-mpp4w
Apr 23 13:14:41.166: INFO: Got endpoints: latency-svc-mpp4w [675.657583ms]
Apr 23 13:14:41.196: INFO: Created: latency-svc-w5t8j
Apr 23 13:14:41.214: INFO: Got endpoints: latency-svc-w5t8j [718.855361ms]
Apr 23 13:14:41.287: INFO: Created: latency-svc-jkct8
Apr 23 13:14:41.291: INFO: Got endpoints: latency-svc-jkct8 [747.350781ms]
Apr 23 13:14:41.348: INFO: Created: latency-svc-npxhm
Apr 23 13:14:41.358: INFO: Got endpoints: latency-svc-npxhm [730.116047ms]
Apr 23 13:14:41.419: INFO: Created: latency-svc-fnjtf
Apr 23 13:14:41.424: INFO: Got endpoints: latency-svc-fnjtf [760.044269ms]
Apr 23 13:14:41.455: INFO: Created: latency-svc-zbh48
Apr 23 13:14:41.467: INFO: Got endpoints: latency-svc-zbh48 [765.282153ms]
Apr 23 13:14:41.497: INFO: Created: latency-svc-kjpfr
Apr 23 13:14:41.519: INFO: Got endpoints: latency-svc-kjpfr [761.216336ms]
Apr 23 13:14:41.581: INFO: Created: latency-svc-jqsh8
Apr 23 13:14:41.606: INFO: Got endpoints: latency-svc-jqsh8 [808.42721ms]
Apr 23 13:14:41.607: INFO: Created: latency-svc-8kh9m
Apr 23 13:14:41.619: INFO: Got endpoints: latency-svc-8kh9m [785.088941ms]
Apr 23 13:14:41.653: INFO: Created: latency-svc-qd74w
Apr 23 13:14:41.676: INFO: Got endpoints: latency-svc-qd74w [793.299545ms]
Apr 23 13:14:41.737: INFO: Created: latency-svc-vnbnk
Apr 23 13:14:41.762: INFO: Got endpoints: latency-svc-vnbnk [844.215773ms]
Apr 23 13:14:41.762: INFO: Created: latency-svc-6g54t
Apr 23 13:14:41.775: INFO: Got endpoints: latency-svc-6g54t [801.906144ms]
Apr 23 13:14:41.798: INFO: Created: latency-svc-2qlnt
Apr 23 13:14:41.812: INFO: Got endpoints: latency-svc-2qlnt [790.10584ms]
Apr 23 13:14:41.826: INFO: Created: latency-svc-grjx8
Apr 23 13:14:41.880: INFO: Got endpoints: latency-svc-grjx8 [804.781968ms]
Apr 23 13:14:41.898: INFO: Created: latency-svc-glrjw
Apr 23 13:14:41.908: INFO: Got endpoints: latency-svc-glrjw [758.095933ms]
Apr 23 13:14:41.936: INFO: Created: latency-svc-qgzgr
Apr 23 13:14:41.950: INFO: Got endpoints: latency-svc-qgzgr [783.92219ms]
Apr 23 13:14:41.972: INFO: Created: latency-svc-z2wm5
Apr 23 13:14:42.030: INFO: Got endpoints: latency-svc-z2wm5 [815.474139ms]
Apr 23 13:14:42.031: INFO: Created: latency-svc-wb4n4
Apr 23 13:14:42.054: INFO: Got endpoints: latency-svc-wb4n4 [763.137857ms]
Apr 23 13:14:42.090: INFO: Created: latency-svc-s4kf2
Apr 23 13:14:42.101: INFO: Got endpoints: latency-svc-s4kf2 [742.540287ms]
Apr 23 13:14:42.120: INFO: Created: latency-svc-96k7g
Apr 23 13:14:42.180: INFO: Got endpoints: latency-svc-96k7g [755.460169ms]
Apr 23 13:14:42.182: INFO: Created: latency-svc-rqkv9
Apr 23 13:14:42.186: INFO: Got endpoints: latency-svc-rqkv9 [718.556232ms]
Apr 23 13:14:42.276: INFO: Created: latency-svc-s7jhs
Apr 23 13:14:42.323: INFO: Got endpoints: latency-svc-s7jhs [804.405672ms]
Apr 23 13:14:42.337: INFO: Created: latency-svc-9chfh
Apr 23 13:14:42.350: INFO: Got endpoints: latency-svc-9chfh [743.527925ms]
Apr 23 13:14:42.380: INFO: Created: latency-svc-ctg5n
Apr 23 13:14:42.396: INFO: Got endpoints: latency-svc-ctg5n [777.396764ms]
Apr 23 13:14:42.416: INFO: Created: latency-svc-nbgxf
Apr 23 13:14:42.449: INFO: Got endpoints: latency-svc-nbgxf [772.789724ms]
Apr 23 13:14:42.462: INFO: Created: latency-svc-fn96t
Apr 23 13:14:42.474: INFO: Got endpoints: latency-svc-fn96t [712.191985ms]
Apr 23 13:14:42.500: INFO: Created: latency-svc-mb6t5
Apr 23 13:14:42.511: INFO: Got endpoints: latency-svc-mb6t5 [736.196762ms]
Apr 23 13:14:42.535: INFO: Created: latency-svc-tzr5c
Apr 23 13:14:42.548: INFO: Got endpoints: latency-svc-tzr5c [736.179966ms]
Apr 23 13:14:42.602: INFO: Created: latency-svc-68v5s
Apr 23 13:14:42.650: INFO: Got endpoints: latency-svc-68v5s [770.275971ms]
Apr 23 13:14:42.674: INFO: Created: latency-svc-sjmr7
Apr 23 13:14:42.736: INFO: Got endpoints: latency-svc-sjmr7 [828.387425ms]
Apr 23 13:14:42.738: INFO: Created: latency-svc-sgjlh
Apr 23 13:14:42.740: INFO: Got endpoints: latency-svc-sgjlh [789.64726ms]
Apr 23 13:14:42.769: INFO: Created: latency-svc-62b48
Apr 23 13:14:42.782: INFO: Got endpoints: latency-svc-62b48 [752.474954ms]
Apr 23 13:14:42.806: INFO: Created: latency-svc-4ptfc
Apr 23 13:14:42.819: INFO: Got endpoints: latency-svc-4ptfc [764.459622ms]
Apr 23 13:14:42.874: INFO: Created: latency-svc-fw2jc
Apr 23 13:14:42.877: INFO: Got endpoints: latency-svc-fw2jc [775.646928ms]
Apr 23 13:14:42.877: INFO: Latencies: [85.640826ms 102.068168ms 144.385099ms 211.136453ms 240.663075ms 282.8179ms 367.23828ms 409.403006ms 451.73315ms 542.852074ms 590.285909ms 627.490425ms 636.460108ms 653.141126ms 661.035856ms 663.663841ms 669.382244ms 675.657583ms 677.181964ms 680.063668ms 680.667073ms 687.969294ms 689.129721ms 694.128362ms 694.529111ms 699.490331ms 700.149461ms 700.264893ms 700.662826ms 705.925243ms 706.069229ms 706.970264ms 708.905018ms 710.591359ms 712.191985ms 716.80633ms 717.696492ms 718.556232ms 718.774352ms 718.855361ms 719.514679ms 720.434375ms 720.4673ms 722.37509ms 727.08002ms 727.261818ms 727.4935ms 728.349203ms 728.935698ms 729.060283ms 729.232609ms 730.116047ms 731.009831ms 733.148484ms 733.181759ms 734.354852ms 734.401451ms 735.478123ms 736.179966ms 736.196762ms 736.427796ms 736.967397ms 737.707248ms 737.748571ms 739.222098ms 739.890593ms 740.243936ms 742.243742ms 742.540287ms 743.527925ms 743.935443ms 744.275969ms 744.543599ms 745.269074ms 747.350781ms 747.383171ms 747.438898ms 748.185452ms 749.712598ms 749.968312ms 752.083797ms 752.474954ms 752.847769ms 753.002265ms 754.540616ms 754.624761ms 755.460169ms 758.095933ms 759.151099ms 760.044269ms 760.342338ms 761.028592ms 761.216336ms 761.39909ms 761.493268ms 763.137857ms 764.459622ms 765.000196ms 765.282153ms 765.70556ms 766.184309ms 766.268851ms 766.569844ms 767.435952ms 769.261813ms 769.394225ms 770.275971ms 772.377578ms 772.789724ms 775.646928ms 776.993897ms 777.295401ms 777.370936ms 777.396764ms 777.884222ms 778.25135ms 778.503714ms 779.908872ms 781.034004ms 781.694763ms 781.930502ms 782.543132ms 782.699351ms 783.92219ms 785.088941ms 786.383333ms 788.962221ms 789.068181ms 789.09275ms 789.423086ms 789.64726ms 789.745225ms 790.100292ms 790.10584ms 790.891094ms 791.581491ms 791.921399ms 792.409265ms 793.299545ms 794.531767ms 796.267126ms 796.6407ms 799.23678ms 800.494264ms 800.969724ms 801.296174ms 801.374848ms 801.51702ms 801.536168ms 801.88957ms 801.906144ms 802.021643ms 802.59396ms 803.21293ms 803.724174ms 804.405672ms 804.781968ms 807.62616ms 807.635145ms 808.028476ms 808.42721ms 808.791407ms 809.660253ms 810.068048ms 813.936497ms 815.474139ms 816.936723ms 817.872656ms 818.409527ms 818.948153ms 819.984735ms 825.337426ms 826.405213ms 828.387425ms 829.210171ms 831.546802ms 833.015117ms 837.49776ms 839.778453ms 840.474232ms 841.604014ms 843.416533ms 843.659031ms 843.761985ms 844.215773ms 862.303269ms 866.965879ms 871.316643ms 894.530371ms 904.629516ms 906.694968ms 906.777195ms 906.996882ms 921.906338ms 923.038774ms 923.971125ms 929.373151ms 929.867392ms 977.555549ms 986.57944ms]
Apr 23 13:14:42.877: INFO: 50 %ile: 766.184309ms
Apr 23 13:14:42.877: INFO: 90 %ile: 841.604014ms
Apr 23 13:14:42.877: INFO: 99 %ile: 977.555549ms
Apr 23 13:14:42.877: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:14:42.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-151" for this suite.
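The `%ile` lines above summarize the 200 sorted latency samples printed in the `Latencies:` entry. As an illustration only (not the e2e framework's exact code, whose rounding may differ), a nearest-rank percentile over a sorted sample list can be sketched as:

```python
import math

def percentile(sorted_samples, pct):
    """Nearest-rank percentile: smallest sample whose rank covers pct% of the list."""
    if not sorted_samples:
        raise ValueError("no samples")
    rank = math.ceil(pct / 100.0 * len(sorted_samples))  # 1-based rank
    return sorted_samples[max(rank, 1) - 1]

# toy data in seconds, not the 200 samples from the run above
samples = sorted([0.85, 0.70, 0.90, 0.75, 0.80])
p50 = percentile(samples, 50)  # median of the toy data: 0.80
```

Under this definition, the reported 50/90/99 %ile values are simply elements picked out of the sorted `Latencies` list, which is why each one matches a sample that appears verbatim in it.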
Apr 23 13:15:04.887: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:15:04.977: INFO: namespace svc-latency-151 deletion completed in 22.09756187s
• [SLOW TEST:36.707 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:15:04.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 23 13:15:05.042: INFO: Waiting up to 5m0s for pod "pod-ef7539be-e1f3-412a-84da-9acf8a625842" in namespace "emptydir-9548" to be "success or failure"
Apr 23 13:15:05.045: INFO: Pod "pod-ef7539be-e1f3-412a-84da-9acf8a625842": Phase="Pending", Reason="", readiness=false. Elapsed: 3.497659ms
Apr 23 13:15:07.049: INFO: Pod "pod-ef7539be-e1f3-412a-84da-9acf8a625842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007420961s
Apr 23 13:15:09.052: INFO: Pod "pod-ef7539be-e1f3-412a-84da-9acf8a625842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010822724s
STEP: Saw pod success
Apr 23 13:15:09.052: INFO: Pod "pod-ef7539be-e1f3-412a-84da-9acf8a625842" satisfied condition "success or failure"
Apr 23 13:15:09.055: INFO: Trying to get logs from node iruya-worker2 pod pod-ef7539be-e1f3-412a-84da-9acf8a625842 container test-container:
STEP: delete the pod
Apr 23 13:15:09.123: INFO: Waiting for pod pod-ef7539be-e1f3-412a-84da-9acf8a625842 to disappear
Apr 23 13:15:09.135: INFO: Pod pod-ef7539be-e1f3-412a-84da-9acf8a625842 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:15:09.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9548" for this suite.
Apr 23 13:15:15.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:15:15.229: INFO: namespace emptydir-9548 deletion completed in 6.091686898s
• [SLOW TEST:10.252 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:15:15.230: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-5597
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-5597
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5597
Apr 23 13:15:15.302: INFO: Found 0 stateful pods, waiting for 1
Apr 23 13:15:25.307: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Apr 23 13:15:25.311: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5597 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 23 13:15:25.549: INFO: stderr: "I0423 13:15:25.446008 966 log.go:172] (0xc00012cdc0) (0xc00036e6e0) Create stream\nI0423 13:15:25.446075 966 log.go:172] (0xc00012cdc0) (0xc00036e6e0) Stream added, broadcasting: 1\nI0423 13:15:25.456739 966 log.go:172] (0xc00012cdc0) Reply frame received for 1\nI0423 13:15:25.456774 966 log.go:172] (0xc00012cdc0) (0xc0005a23c0) Create stream\nI0423 13:15:25.456782 966 log.go:172] (0xc00012cdc0) (0xc0005a23c0) Stream added, broadcasting: 3\nI0423 13:15:25.457950 966 log.go:172] (0xc00012cdc0) Reply frame received for 3\nI0423 13:15:25.457980 966 log.go:172] (0xc00012cdc0) (0xc000870000) Create stream\nI0423 13:15:25.457989 966 log.go:172] (0xc00012cdc0) (0xc000870000) Stream added, broadcasting: 5\nI0423 13:15:25.458872 966 log.go:172] (0xc00012cdc0) Reply frame received for 5\nI0423 13:15:25.512239 966 log.go:172] (0xc00012cdc0) Data frame received for 5\nI0423 13:15:25.512269 966 log.go:172] (0xc000870000) (5) Data frame handling\nI0423 13:15:25.512285 966 log.go:172] (0xc000870000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 13:15:25.542050 966 log.go:172] (0xc00012cdc0) Data frame received for 3\nI0423 13:15:25.542082 966 log.go:172] (0xc0005a23c0) (3) Data frame handling\nI0423 13:15:25.542110 966 log.go:172] (0xc0005a23c0) (3) Data frame sent\nI0423 13:15:25.542387 966 log.go:172] (0xc00012cdc0) Data frame received for 5\nI0423 13:15:25.542423 966 log.go:172] (0xc000870000) (5) Data frame handling\nI0423 13:15:25.542587 966 log.go:172] (0xc00012cdc0) Data frame received for 3\nI0423 13:15:25.542601 966 log.go:172] (0xc0005a23c0) (3) Data frame handling\nI0423 13:15:25.544572 966 log.go:172] (0xc00012cdc0) Data frame received for 1\nI0423 13:15:25.544682 966 log.go:172] (0xc00036e6e0) (1) Data frame handling\nI0423 13:15:25.544775 966 log.go:172] (0xc00036e6e0) (1) Data frame sent\nI0423 13:15:25.544820 966 log.go:172] (0xc00012cdc0) (0xc00036e6e0) Stream removed, broadcasting: 1\nI0423 13:15:25.544875 966 log.go:172] (0xc00012cdc0) Go away received\nI0423 13:15:25.545224 966 log.go:172] (0xc00012cdc0) (0xc00036e6e0) Stream removed, broadcasting: 1\nI0423 13:15:25.545244 966 log.go:172] (0xc00012cdc0) (0xc0005a23c0) Stream removed, broadcasting: 3\nI0423 13:15:25.545254 966 log.go:172] (0xc00012cdc0) (0xc000870000) Stream removed, broadcasting: 5\n"
Apr 23 13:15:25.550: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 23 13:15:25.550: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 23 13:15:25.553: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 23 13:15:35.558: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 23 13:15:35.558: INFO: Waiting for statefulset status.replicas updated to 0
Apr 23 13:15:35.579: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999392s
Apr 23 13:15:36.583: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990047187s
Apr 23 13:15:37.588: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.985827263s
Apr 23 13:15:38.593: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.980953752s
Apr 23 13:15:39.598: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.975863102s
Apr 23 13:15:40.601: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.971028782s
Apr 23 13:15:41.606: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967296022s
Apr 23 13:15:42.611: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.962667627s
Apr 23 13:15:43.615: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.957639554s
Apr 23 13:15:44.620: INFO: Verifying statefulset ss doesn't scale past 1 for another 953.470781ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5597
Apr 23 13:15:45.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 23 13:15:45.831: INFO: stderr: "I0423 13:15:45.756308 986 log.go:172] (0xc00098c0b0) (0xc000900140) Create stream\nI0423 13:15:45.756366 986 log.go:172] (0xc00098c0b0) (0xc000900140) Stream added, broadcasting: 1\nI0423 13:15:45.758420 986 log.go:172] (0xc00098c0b0) Reply frame received for 1\nI0423 13:15:45.758466 986 log.go:172] (0xc00098c0b0) (0xc0004ce1e0) Create stream\nI0423 13:15:45.758480 986 log.go:172] (0xc00098c0b0) (0xc0004ce1e0) Stream added, broadcasting: 3\nI0423 13:15:45.759188 986 log.go:172] (0xc00098c0b0) Reply frame received for 3\nI0423 13:15:45.759210 986 log.go:172] (0xc00098c0b0) (0xc0004ce280) Create stream\nI0423 13:15:45.759218 986 log.go:172] (0xc00098c0b0) (0xc0004ce280) Stream added, broadcasting: 5\nI0423 13:15:45.760134 986 log.go:172] (0xc00098c0b0) Reply frame received for 5\nI0423 13:15:45.823844 986 log.go:172] (0xc00098c0b0) Data frame received for 5\nI0423 13:15:45.823877 986 log.go:172] (0xc0004ce280) (5) Data frame handling\nI0423 13:15:45.823886 986 log.go:172] (0xc0004ce280) (5) Data frame sent\nI0423 13:15:45.823893 986 log.go:172] (0xc00098c0b0) Data frame received for 5\nI0423 13:15:45.823898 986 log.go:172] (0xc0004ce280) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0423 13:15:45.823913 986 log.go:172] (0xc00098c0b0) Data frame received for 3\nI0423 13:15:45.823924 986 log.go:172] (0xc0004ce1e0) (3) Data frame handling\nI0423 13:15:45.823937 986 log.go:172] (0xc0004ce1e0) (3) Data frame sent\nI0423 13:15:45.823950 986 log.go:172] (0xc00098c0b0) Data frame received for 3\nI0423 13:15:45.823956 986 log.go:172] (0xc0004ce1e0) (3) Data frame handling\nI0423 13:15:45.825594 986 log.go:172] (0xc00098c0b0) Data frame received for 1\nI0423 13:15:45.825628 986 log.go:172] (0xc000900140) (1) Data frame handling\nI0423 13:15:45.825643 986 log.go:172] (0xc000900140) (1) Data frame sent\nI0423 13:15:45.825662 986 log.go:172] (0xc00098c0b0) (0xc000900140) Stream removed, broadcasting: 1\nI0423 13:15:45.825753 986 log.go:172] (0xc00098c0b0) Go away received\nI0423 13:15:45.826021 986 log.go:172] (0xc00098c0b0) (0xc000900140) Stream removed, broadcasting: 1\nI0423 13:15:45.826037 986 log.go:172] (0xc00098c0b0) (0xc0004ce1e0) Stream removed, broadcasting: 3\nI0423 13:15:45.826045 986 log.go:172] (0xc00098c0b0) (0xc0004ce280) Stream removed, broadcasting: 5\n"
Apr 23 13:15:45.831: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 23 13:15:45.831: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 23 13:15:45.834: INFO: Found 1 stateful pods, waiting for 3
Apr 23 13:15:55.840: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 23 13:15:55.840: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 23 13:15:55.840: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Apr 23 13:15:55.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5597 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 23 13:15:56.081: INFO: stderr: "I0423 13:15:55.976321 1005 log.go:172] (0xc0009b6630) (0xc000614aa0) Create stream\nI0423 13:15:55.976376 1005 log.go:172] (0xc0009b6630) (0xc000614aa0) Stream added, broadcasting: 1\nI0423 13:15:55.980103 1005 log.go:172] (0xc0009b6630) Reply frame received for 1\nI0423 13:15:55.980163 1005 log.go:172] (0xc0009b6630) (0xc0006141e0) Create stream\nI0423 13:15:55.980175 1005 log.go:172] (0xc0009b6630) (0xc0006141e0) Stream added, broadcasting: 3\nI0423 13:15:55.981362 1005 log.go:172] (0xc0009b6630) Reply frame received for 3\nI0423 13:15:55.981411 1005 log.go:172] (0xc0009b6630) (0xc00023e000) Create stream\nI0423 13:15:55.981427 1005 log.go:172] (0xc0009b6630) (0xc00023e000) Stream added, broadcasting: 5\nI0423 13:15:55.982271 1005 log.go:172] (0xc0009b6630) Reply frame received for 5\nI0423 13:15:56.074590 1005 log.go:172] (0xc0009b6630) Data frame received for 5\nI0423 13:15:56.074646 1005 log.go:172] (0xc00023e000) (5) Data frame handling\nI0423 13:15:56.074673 1005 log.go:172] (0xc00023e000) (5) Data frame sent\nI0423 13:15:56.074692 1005 log.go:172] (0xc0009b6630) Data frame received for 5\nI0423 13:15:56.074709 1005 log.go:172] (0xc00023e000) (5) Data frame handling\nI0423 13:15:56.074734 1005 log.go:172] (0xc0009b6630) Data frame received for 3\nI0423 13:15:56.074763 1005 log.go:172] (0xc0006141e0) (3) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 13:15:56.074798 1005 log.go:172] (0xc0006141e0) (3) Data frame sent\nI0423 13:15:56.074826 1005 log.go:172] (0xc0009b6630) Data frame received for 3\nI0423 13:15:56.074852 1005 log.go:172] (0xc0006141e0) (3) Data frame handling\nI0423 13:15:56.076070 1005 log.go:172] (0xc0009b6630) Data frame received for 1\nI0423 13:15:56.076168 1005 log.go:172] (0xc000614aa0) (1) Data frame handling\nI0423 13:15:56.076200 1005 log.go:172] (0xc000614aa0) (1) Data frame sent\nI0423 13:15:56.076256 1005 log.go:172] (0xc0009b6630) (0xc000614aa0) Stream removed, broadcasting: 1\nI0423 13:15:56.076307 1005 log.go:172] (0xc0009b6630) Go away received\nI0423 13:15:56.076630 1005 log.go:172] (0xc0009b6630) (0xc000614aa0) Stream removed, broadcasting: 1\nI0423 13:15:56.076649 1005 log.go:172] (0xc0009b6630) (0xc0006141e0) Stream removed, broadcasting: 3\nI0423 13:15:56.076656 1005 log.go:172] (0xc0009b6630) (0xc00023e000) Stream removed, broadcasting: 5\n"
Apr 23 13:15:56.082: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 23 13:15:56.082: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 23 13:15:56.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5597 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 23 13:15:56.321: INFO: stderr: "I0423 13:15:56.215621 1025 log.go:172] (0xc000a98420) (0xc0004206e0) Create stream\nI0423 13:15:56.215697 1025 log.go:172] (0xc000a98420) (0xc0004206e0) Stream added, broadcasting: 1\nI0423 13:15:56.220171 1025 log.go:172] (0xc000a98420) Reply frame
received for 1\nI0423 13:15:56.220207 1025 log.go:172] (0xc000a98420) (0xc000420000) Create stream\nI0423 13:15:56.220220 1025 log.go:172] (0xc000a98420) (0xc000420000) Stream added, broadcasting: 3\nI0423 13:15:56.221451 1025 log.go:172] (0xc000a98420) Reply frame received for 3\nI0423 13:15:56.221497 1025 log.go:172] (0xc000a98420) (0xc000678280) Create stream\nI0423 13:15:56.221512 1025 log.go:172] (0xc000a98420) (0xc000678280) Stream added, broadcasting: 5\nI0423 13:15:56.222363 1025 log.go:172] (0xc000a98420) Reply frame received for 5\nI0423 13:15:56.286671 1025 log.go:172] (0xc000a98420) Data frame received for 5\nI0423 13:15:56.286697 1025 log.go:172] (0xc000678280) (5) Data frame handling\nI0423 13:15:56.286718 1025 log.go:172] (0xc000678280) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 13:15:56.314026 1025 log.go:172] (0xc000a98420) Data frame received for 3\nI0423 13:15:56.314053 1025 log.go:172] (0xc000420000) (3) Data frame handling\nI0423 13:15:56.314072 1025 log.go:172] (0xc000420000) (3) Data frame sent\nI0423 13:15:56.314328 1025 log.go:172] (0xc000a98420) Data frame received for 5\nI0423 13:15:56.314353 1025 log.go:172] (0xc000678280) (5) Data frame handling\nI0423 13:15:56.314377 1025 log.go:172] (0xc000a98420) Data frame received for 3\nI0423 13:15:56.314419 1025 log.go:172] (0xc000420000) (3) Data frame handling\nI0423 13:15:56.316245 1025 log.go:172] (0xc000a98420) Data frame received for 1\nI0423 13:15:56.316264 1025 log.go:172] (0xc0004206e0) (1) Data frame handling\nI0423 13:15:56.316273 1025 log.go:172] (0xc0004206e0) (1) Data frame sent\nI0423 13:15:56.316409 1025 log.go:172] (0xc000a98420) (0xc0004206e0) Stream removed, broadcasting: 1\nI0423 13:15:56.316454 1025 log.go:172] (0xc000a98420) Go away received\nI0423 13:15:56.316845 1025 log.go:172] (0xc000a98420) (0xc0004206e0) Stream removed, broadcasting: 1\nI0423 13:15:56.316867 1025 log.go:172] (0xc000a98420) (0xc000420000) Stream removed, broadcasting: 
3\nI0423 13:15:56.316876 1025 log.go:172] (0xc000a98420) (0xc000678280) Stream removed, broadcasting: 5\n" Apr 23 13:15:56.322: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 23 13:15:56.322: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 23 13:15:56.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5597 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 23 13:15:56.829: INFO: stderr: "I0423 13:15:56.459283 1045 log.go:172] (0xc00012d130) (0xc000660b40) Create stream\nI0423 13:15:56.459350 1045 log.go:172] (0xc00012d130) (0xc000660b40) Stream added, broadcasting: 1\nI0423 13:15:56.462056 1045 log.go:172] (0xc00012d130) Reply frame received for 1\nI0423 13:15:56.462125 1045 log.go:172] (0xc00012d130) (0xc0006a6000) Create stream\nI0423 13:15:56.462146 1045 log.go:172] (0xc00012d130) (0xc0006a6000) Stream added, broadcasting: 3\nI0423 13:15:56.463145 1045 log.go:172] (0xc00012d130) Reply frame received for 3\nI0423 13:15:56.463178 1045 log.go:172] (0xc00012d130) (0xc000660be0) Create stream\nI0423 13:15:56.463189 1045 log.go:172] (0xc00012d130) (0xc000660be0) Stream added, broadcasting: 5\nI0423 13:15:56.464231 1045 log.go:172] (0xc00012d130) Reply frame received for 5\nI0423 13:15:56.537713 1045 log.go:172] (0xc00012d130) Data frame received for 5\nI0423 13:15:56.537746 1045 log.go:172] (0xc000660be0) (5) Data frame handling\nI0423 13:15:56.537767 1045 log.go:172] (0xc000660be0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 13:15:56.823549 1045 log.go:172] (0xc00012d130) Data frame received for 3\nI0423 13:15:56.823580 1045 log.go:172] (0xc0006a6000) (3) Data frame handling\nI0423 13:15:56.823610 1045 log.go:172] (0xc0006a6000) (3) Data frame sent\nI0423 13:15:56.824263 1045 log.go:172] (0xc00012d130) Data frame received for 5\nI0423 
13:15:56.824277 1045 log.go:172] (0xc000660be0) (5) Data frame handling\nI0423 13:15:56.824307 1045 log.go:172] (0xc00012d130) Data frame received for 3\nI0423 13:15:56.824318 1045 log.go:172] (0xc0006a6000) (3) Data frame handling\nI0423 13:15:56.825387 1045 log.go:172] (0xc00012d130) Data frame received for 1\nI0423 13:15:56.825422 1045 log.go:172] (0xc000660b40) (1) Data frame handling\nI0423 13:15:56.825450 1045 log.go:172] (0xc000660b40) (1) Data frame sent\nI0423 13:15:56.825484 1045 log.go:172] (0xc00012d130) (0xc000660b40) Stream removed, broadcasting: 1\nI0423 13:15:56.825645 1045 log.go:172] (0xc00012d130) Go away received\nI0423 13:15:56.825757 1045 log.go:172] (0xc00012d130) (0xc000660b40) Stream removed, broadcasting: 1\nI0423 13:15:56.825769 1045 log.go:172] (0xc00012d130) (0xc0006a6000) Stream removed, broadcasting: 3\nI0423 13:15:56.825777 1045 log.go:172] (0xc00012d130) (0xc000660be0) Stream removed, broadcasting: 5\n" Apr 23 13:15:56.829: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 23 13:15:56.829: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 23 13:15:56.829: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 13:15:56.832: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 23 13:16:06.840: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 23 13:16:06.840: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 23 13:16:06.840: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 23 13:16:06.855: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999461s Apr 23 13:16:07.860: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991759807s Apr 23 13:16:08.865: INFO: Verifying statefulset ss doesn't scale past 3 for another 
7.986601492s Apr 23 13:16:09.870: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.981644299s Apr 23 13:16:10.877: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.97681959s Apr 23 13:16:11.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970351956s Apr 23 13:16:12.887: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965035753s Apr 23 13:16:13.893: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.95973958s Apr 23 13:16:14.898: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.954241194s Apr 23 13:16:15.903: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.558498ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5597 Apr 23 13:16:16.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 13:16:17.165: INFO: stderr: "I0423 13:16:17.048913 1067 log.go:172] (0xc00097a370) (0xc000a92780) Create stream\nI0423 13:16:17.048985 1067 log.go:172] (0xc00097a370) (0xc000a92780) Stream added, broadcasting: 1\nI0423 13:16:17.062984 1067 log.go:172] (0xc00097a370) Reply frame received for 1\nI0423 13:16:17.063030 1067 log.go:172] (0xc00097a370) (0xc000a92820) Create stream\nI0423 13:16:17.063039 1067 log.go:172] (0xc00097a370) (0xc000a92820) Stream added, broadcasting: 3\nI0423 13:16:17.065453 1067 log.go:172] (0xc00097a370) Reply frame received for 3\nI0423 13:16:17.065487 1067 log.go:172] (0xc00097a370) (0xc0008b4000) Create stream\nI0423 13:16:17.065512 1067 log.go:172] (0xc00097a370) (0xc0008b4000) Stream added, broadcasting: 5\nI0423 13:16:17.066221 1067 log.go:172] (0xc00097a370) Reply frame received for 5\nI0423 13:16:17.159282 1067 log.go:172] (0xc00097a370) Data frame received for 3\nI0423 13:16:17.159344 1067 log.go:172] (0xc000a92820) (3) Data frame 
handling\nI0423 13:16:17.159381 1067 log.go:172] (0xc000a92820) (3) Data frame sent\nI0423 13:16:17.159400 1067 log.go:172] (0xc00097a370) Data frame received for 3\nI0423 13:16:17.159414 1067 log.go:172] (0xc000a92820) (3) Data frame handling\nI0423 13:16:17.159434 1067 log.go:172] (0xc00097a370) Data frame received for 5\nI0423 13:16:17.159450 1067 log.go:172] (0xc0008b4000) (5) Data frame handling\nI0423 13:16:17.159475 1067 log.go:172] (0xc0008b4000) (5) Data frame sent\nI0423 13:16:17.159513 1067 log.go:172] (0xc00097a370) Data frame received for 5\nI0423 13:16:17.159552 1067 log.go:172] (0xc0008b4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0423 13:16:17.160760 1067 log.go:172] (0xc00097a370) Data frame received for 1\nI0423 13:16:17.160777 1067 log.go:172] (0xc000a92780) (1) Data frame handling\nI0423 13:16:17.160785 1067 log.go:172] (0xc000a92780) (1) Data frame sent\nI0423 13:16:17.160794 1067 log.go:172] (0xc00097a370) (0xc000a92780) Stream removed, broadcasting: 1\nI0423 13:16:17.160826 1067 log.go:172] (0xc00097a370) Go away received\nI0423 13:16:17.161241 1067 log.go:172] (0xc00097a370) (0xc000a92780) Stream removed, broadcasting: 1\nI0423 13:16:17.161261 1067 log.go:172] (0xc00097a370) (0xc000a92820) Stream removed, broadcasting: 3\nI0423 13:16:17.161267 1067 log.go:172] (0xc00097a370) (0xc0008b4000) Stream removed, broadcasting: 5\n" Apr 23 13:16:17.165: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 23 13:16:17.165: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 23 13:16:17.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5597 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 13:16:17.396: INFO: stderr: "I0423 13:16:17.299548 1087 log.go:172] (0xc0008dc420) (0xc00035a820) Create stream\nI0423 13:16:17.299607 
1087 log.go:172] (0xc0008dc420) (0xc00035a820) Stream added, broadcasting: 1\nI0423 13:16:17.301631 1087 log.go:172] (0xc0008dc420) Reply frame received for 1\nI0423 13:16:17.301684 1087 log.go:172] (0xc0008dc420) (0xc00079a000) Create stream\nI0423 13:16:17.301710 1087 log.go:172] (0xc0008dc420) (0xc00079a000) Stream added, broadcasting: 3\nI0423 13:16:17.302614 1087 log.go:172] (0xc0008dc420) Reply frame received for 3\nI0423 13:16:17.302653 1087 log.go:172] (0xc0008dc420) (0xc00035a8c0) Create stream\nI0423 13:16:17.302674 1087 log.go:172] (0xc0008dc420) (0xc00035a8c0) Stream added, broadcasting: 5\nI0423 13:16:17.303479 1087 log.go:172] (0xc0008dc420) Reply frame received for 5\nI0423 13:16:17.389496 1087 log.go:172] (0xc0008dc420) Data frame received for 5\nI0423 13:16:17.389524 1087 log.go:172] (0xc00035a8c0) (5) Data frame handling\nI0423 13:16:17.389531 1087 log.go:172] (0xc00035a8c0) (5) Data frame sent\nI0423 13:16:17.389537 1087 log.go:172] (0xc0008dc420) Data frame received for 5\nI0423 13:16:17.389541 1087 log.go:172] (0xc00035a8c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0423 13:16:17.389570 1087 log.go:172] (0xc0008dc420) Data frame received for 3\nI0423 13:16:17.389605 1087 log.go:172] (0xc00079a000) (3) Data frame handling\nI0423 13:16:17.389625 1087 log.go:172] (0xc00079a000) (3) Data frame sent\nI0423 13:16:17.389647 1087 log.go:172] (0xc0008dc420) Data frame received for 3\nI0423 13:16:17.389662 1087 log.go:172] (0xc00079a000) (3) Data frame handling\nI0423 13:16:17.391635 1087 log.go:172] (0xc0008dc420) Data frame received for 1\nI0423 13:16:17.391658 1087 log.go:172] (0xc00035a820) (1) Data frame handling\nI0423 13:16:17.391673 1087 log.go:172] (0xc00035a820) (1) Data frame sent\nI0423 13:16:17.391796 1087 log.go:172] (0xc0008dc420) (0xc00035a820) Stream removed, broadcasting: 1\nI0423 13:16:17.391860 1087 log.go:172] (0xc0008dc420) Go away received\nI0423 13:16:17.392091 1087 log.go:172] (0xc0008dc420) 
(0xc00035a820) Stream removed, broadcasting: 1\nI0423 13:16:17.392113 1087 log.go:172] (0xc0008dc420) (0xc00079a000) Stream removed, broadcasting: 3\nI0423 13:16:17.392125 1087 log.go:172] (0xc0008dc420) (0xc00035a8c0) Stream removed, broadcasting: 5\n" Apr 23 13:16:17.396: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 23 13:16:17.396: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 23 13:16:17.396: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5597 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 13:16:17.594: INFO: stderr: "I0423 13:16:17.519675 1108 log.go:172] (0xc000116fd0) (0xc0005bcaa0) Create stream\nI0423 13:16:17.519728 1108 log.go:172] (0xc000116fd0) (0xc0005bcaa0) Stream added, broadcasting: 1\nI0423 13:16:17.522981 1108 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0423 13:16:17.523031 1108 log.go:172] (0xc000116fd0) (0xc0005bc1e0) Create stream\nI0423 13:16:17.523042 1108 log.go:172] (0xc000116fd0) (0xc0005bc1e0) Stream added, broadcasting: 3\nI0423 13:16:17.523944 1108 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0423 13:16:17.523991 1108 log.go:172] (0xc000116fd0) (0xc0000d4000) Create stream\nI0423 13:16:17.524000 1108 log.go:172] (0xc000116fd0) (0xc0000d4000) Stream added, broadcasting: 5\nI0423 13:16:17.524654 1108 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0423 13:16:17.587430 1108 log.go:172] (0xc000116fd0) Data frame received for 5\nI0423 13:16:17.587478 1108 log.go:172] (0xc0000d4000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0423 13:16:17.587524 1108 log.go:172] (0xc000116fd0) Data frame received for 3\nI0423 13:16:17.587560 1108 log.go:172] (0xc0005bc1e0) (3) Data frame handling\nI0423 13:16:17.587582 1108 log.go:172] (0xc0005bc1e0) (3) Data frame sent\nI0423 
13:16:17.587603 1108 log.go:172] (0xc000116fd0) Data frame received for 3\nI0423 13:16:17.587619 1108 log.go:172] (0xc0005bc1e0) (3) Data frame handling\nI0423 13:16:17.587642 1108 log.go:172] (0xc0000d4000) (5) Data frame sent\nI0423 13:16:17.587660 1108 log.go:172] (0xc000116fd0) Data frame received for 5\nI0423 13:16:17.587673 1108 log.go:172] (0xc0000d4000) (5) Data frame handling\nI0423 13:16:17.589416 1108 log.go:172] (0xc000116fd0) Data frame received for 1\nI0423 13:16:17.589449 1108 log.go:172] (0xc0005bcaa0) (1) Data frame handling\nI0423 13:16:17.589472 1108 log.go:172] (0xc0005bcaa0) (1) Data frame sent\nI0423 13:16:17.589497 1108 log.go:172] (0xc000116fd0) (0xc0005bcaa0) Stream removed, broadcasting: 1\nI0423 13:16:17.589524 1108 log.go:172] (0xc000116fd0) Go away received\nI0423 13:16:17.589843 1108 log.go:172] (0xc000116fd0) (0xc0005bcaa0) Stream removed, broadcasting: 1\nI0423 13:16:17.589865 1108 log.go:172] (0xc000116fd0) (0xc0005bc1e0) Stream removed, broadcasting: 3\nI0423 13:16:17.589876 1108 log.go:172] (0xc000116fd0) (0xc0000d4000) Stream removed, broadcasting: 5\n" Apr 23 13:16:17.595: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 23 13:16:17.595: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 23 13:16:17.595: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 23 13:16:47.610: INFO: Deleting all statefulset in ns statefulset-5597 Apr 23 13:16:47.613: INFO: Scaling statefulset ss to 0 Apr 23 13:16:47.622: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 13:16:47.625: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:16:47.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-5597" for this suite. Apr 23 13:16:53.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:16:53.734: INFO: namespace statefulset-5597 deletion completed in 6.095903902s • [SLOW TEST:98.504 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:16:53.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-2vtc STEP: Creating a pod to test 
atomic-volume-subpath Apr 23 13:16:53.834: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-2vtc" in namespace "subpath-8000" to be "success or failure" Apr 23 13:16:53.844: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Pending", Reason="", readiness=false. Elapsed: 9.96287ms Apr 23 13:16:55.848: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013630858s Apr 23 13:16:57.851: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 4.017408851s Apr 23 13:16:59.856: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 6.021913313s Apr 23 13:17:01.860: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 8.02577166s Apr 23 13:17:03.864: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 10.029583219s Apr 23 13:17:05.868: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 12.034212861s Apr 23 13:17:07.872: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 14.038461825s Apr 23 13:17:09.877: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 16.043098512s Apr 23 13:17:11.881: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 18.046921372s Apr 23 13:17:13.885: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 20.051525436s Apr 23 13:17:15.890: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Running", Reason="", readiness=true. Elapsed: 22.055737301s Apr 23 13:17:17.893: INFO: Pod "pod-subpath-test-projected-2vtc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.059062929s STEP: Saw pod success Apr 23 13:17:17.893: INFO: Pod "pod-subpath-test-projected-2vtc" satisfied condition "success or failure" Apr 23 13:17:17.895: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-2vtc container test-container-subpath-projected-2vtc: STEP: delete the pod Apr 23 13:17:17.938: INFO: Waiting for pod pod-subpath-test-projected-2vtc to disappear Apr 23 13:17:17.966: INFO: Pod pod-subpath-test-projected-2vtc no longer exists STEP: Deleting pod pod-subpath-test-projected-2vtc Apr 23 13:17:17.966: INFO: Deleting pod "pod-subpath-test-projected-2vtc" in namespace "subpath-8000" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:17:17.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8000" for this suite. Apr 23 13:17:23.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:17:24.054: INFO: namespace subpath-8000 deletion completed in 6.08131443s • [SLOW TEST:30.320 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 
13:17:24.054: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-06c56e6e-020b-4467-8599-5af784f434ef STEP: Creating a pod to test consume secrets Apr 23 13:17:24.101: INFO: Waiting up to 5m0s for pod "pod-secrets-3f01f467-3a70-4361-88ad-cbc4c2a9230f" in namespace "secrets-3996" to be "success or failure" Apr 23 13:17:24.119: INFO: Pod "pod-secrets-3f01f467-3a70-4361-88ad-cbc4c2a9230f": Phase="Pending", Reason="", readiness=false. Elapsed: 17.626308ms Apr 23 13:17:26.123: INFO: Pod "pod-secrets-3f01f467-3a70-4361-88ad-cbc4c2a9230f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022062192s Apr 23 13:17:28.128: INFO: Pod "pod-secrets-3f01f467-3a70-4361-88ad-cbc4c2a9230f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026454087s STEP: Saw pod success Apr 23 13:17:28.128: INFO: Pod "pod-secrets-3f01f467-3a70-4361-88ad-cbc4c2a9230f" satisfied condition "success or failure" Apr 23 13:17:28.131: INFO: Trying to get logs from node iruya-worker pod pod-secrets-3f01f467-3a70-4361-88ad-cbc4c2a9230f container secret-volume-test: STEP: delete the pod Apr 23 13:17:28.150: INFO: Waiting for pod pod-secrets-3f01f467-3a70-4361-88ad-cbc4c2a9230f to disappear Apr 23 13:17:28.154: INFO: Pod pod-secrets-3f01f467-3a70-4361-88ad-cbc4c2a9230f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:17:28.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3996" for this suite. 
Apr 23 13:17:34.170: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:17:34.251: INFO: namespace secrets-3996 deletion completed in 6.092980419s • [SLOW TEST:10.196 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:17:34.251: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's args Apr 23 13:17:34.324: INFO: Waiting up to 5m0s for pod "var-expansion-380e1903-e164-4487-9984-ca38eba092d7" in namespace "var-expansion-7712" to be "success or failure" Apr 23 13:17:34.350: INFO: Pod "var-expansion-380e1903-e164-4487-9984-ca38eba092d7": Phase="Pending", Reason="", readiness=false. Elapsed: 25.204382ms Apr 23 13:17:36.353: INFO: Pod "var-expansion-380e1903-e164-4487-9984-ca38eba092d7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028613409s Apr 23 13:17:38.358: INFO: Pod "var-expansion-380e1903-e164-4487-9984-ca38eba092d7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033247555s STEP: Saw pod success Apr 23 13:17:38.358: INFO: Pod "var-expansion-380e1903-e164-4487-9984-ca38eba092d7" satisfied condition "success or failure" Apr 23 13:17:38.362: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-380e1903-e164-4487-9984-ca38eba092d7 container dapi-container: STEP: delete the pod Apr 23 13:17:38.396: INFO: Waiting for pod var-expansion-380e1903-e164-4487-9984-ca38eba092d7 to disappear Apr 23 13:17:38.421: INFO: Pod var-expansion-380e1903-e164-4487-9984-ca38eba092d7 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:17:38.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-7712" for this suite. Apr 23 13:17:44.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:17:44.515: INFO: namespace var-expansion-7712 deletion completed in 6.089835926s • [SLOW TEST:10.264 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
Apr 23 13:17:44.516: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 23 13:17:44.577: INFO: Waiting up to 5m0s for pod "downwardapi-volume-464e2828-fbca-48b4-bffa-95c0df5b5cdf" in namespace "projected-2265" to be "success or failure" Apr 23 13:17:44.627: INFO: Pod "downwardapi-volume-464e2828-fbca-48b4-bffa-95c0df5b5cdf": Phase="Pending", Reason="", readiness=false. Elapsed: 49.127855ms Apr 23 13:17:46.630: INFO: Pod "downwardapi-volume-464e2828-fbca-48b4-bffa-95c0df5b5cdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052397175s Apr 23 13:17:48.634: INFO: Pod "downwardapi-volume-464e2828-fbca-48b4-bffa-95c0df5b5cdf": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.056541696s STEP: Saw pod success Apr 23 13:17:48.634: INFO: Pod "downwardapi-volume-464e2828-fbca-48b4-bffa-95c0df5b5cdf" satisfied condition "success or failure" Apr 23 13:17:48.637: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-464e2828-fbca-48b4-bffa-95c0df5b5cdf container client-container: STEP: delete the pod Apr 23 13:17:48.730: INFO: Waiting for pod downwardapi-volume-464e2828-fbca-48b4-bffa-95c0df5b5cdf to disappear Apr 23 13:17:48.749: INFO: Pod downwardapi-volume-464e2828-fbca-48b4-bffa-95c0df5b5cdf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:17:48.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2265" for this suite. Apr 23 13:17:54.771: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:17:54.838: INFO: namespace projected-2265 deletion completed in 6.086480788s • [SLOW TEST:10.322 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:17:54.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 23 13:17:54.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2717' Apr 23 13:17:55.036: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 23 13:17:55.036: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 23 13:17:55.041: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 23 13:17:55.056: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 23 13:17:55.158: INFO: scanned /root for discovery docs: Apr 23 13:17:55.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2717' Apr 23 13:18:10.994: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 23 13:18:10.995: INFO: stdout: "Created e2e-test-nginx-rc-d8018212f9287025d271415e156c5a89\nScaling up 
e2e-test-nginx-rc-d8018212f9287025d271415e156c5a89 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-d8018212f9287025d271415e156c5a89 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-d8018212f9287025d271415e156c5a89 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 23 13:18:10.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-2717' Apr 23 13:18:11.087: INFO: stderr: "" Apr 23 13:18:11.087: INFO: stdout: "e2e-test-nginx-rc-d8018212f9287025d271415e156c5a89-t2pd4 " Apr 23 13:18:11.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d8018212f9287025d271415e156c5a89-t2pd4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists .
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2717' Apr 23 13:18:11.193: INFO: stderr: "" Apr 23 13:18:11.193: INFO: stdout: "true" Apr 23 13:18:11.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-d8018212f9287025d271415e156c5a89-t2pd4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2717' Apr 23 13:18:11.295: INFO: stderr: "" Apr 23 13:18:11.295: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 23 13:18:11.295: INFO: e2e-test-nginx-rc-d8018212f9287025d271415e156c5a89-t2pd4 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 23 13:18:11.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2717' Apr 23 13:18:11.394: INFO: stderr: "" Apr 23 13:18:11.394: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:18:11.394: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2717" for this suite. 
Apr 23 13:18:33.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:18:33.522: INFO: namespace kubectl-2717 deletion completed in 22.123477879s • [SLOW TEST:38.684 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:18:33.522: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 13:18:33.571: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) Apr 23 13:18:33.589: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 23 13:18:38.593: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 23 13:18:38.593: INFO: Creating deployment "test-rolling-update-deployment" Apr 23 13:18:38.597: INFO: Ensuring deployment 
"test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 23 13:18:38.620: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 23 13:18:40.628: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 23 13:18:40.631: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723244718, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723244718, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723244718, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723244718, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 13:18:42.636: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 23 13:18:42.646: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-9911,SelfLink:/apis/apps/v1/namespaces/deployment-9911/deployments/test-rolling-update-deployment,UID:86b67c32-e3ce-47eb-85c5-cc5d49775ea9,ResourceVersion:7000272,Generation:1,CreationTimestamp:2020-04-23 13:18:38 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-23 13:18:38 +0000 UTC 2020-04-23 13:18:38 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-23 13:18:42 +0000 UTC 2020-04-23 13:18:38 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 23 13:18:42.649: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-9911,SelfLink:/apis/apps/v1/namespaces/deployment-9911/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:a21a7003-49bc-4228-b282-c36b8278fa0d,ResourceVersion:7000261,Generation:1,CreationTimestamp:2020-04-23 13:18:38 +0000
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 86b67c32-e3ce-47eb-85c5-cc5d49775ea9 0xc00278c4c7 0xc00278c4c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 23 13:18:42.649: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 23 13:18:42.649: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-9911,SelfLink:/apis/apps/v1/namespaces/deployment-9911/replicasets/test-rolling-update-controller,UID:36766c43-a7bd-4f93-bfce-dc32bdb30d50,ResourceVersion:7000270,Generation:2,CreationTimestamp:2020-04-23 13:18:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 86b67c32-e3ce-47eb-85c5-cc5d49775ea9 0xc00278c3a7 0xc00278c3a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 23 13:18:42.652: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-qc9fw" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-qc9fw,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-9911,SelfLink:/api/v1/namespaces/deployment-9911/pods/test-rolling-update-deployment-79f6b9d75c-qc9fw,UID:016b854b-9132-47fe-892b-f6b154f2608d,ResourceVersion:7000260,Generation:0,CreationTimestamp:2020-04-23 13:18:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c a21a7003-49bc-4228-b282-c36b8278fa0d 0xc00278d3c7 0xc00278d3c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-54nw6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-54nw6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-54nw6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00278d440} {node.kubernetes.io/unreachable Exists NoExecute 0xc00278d460}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:18:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:18:42 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:18:42 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:18:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.61,StartTime:2020-04-23 13:18:38 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-23 13:18:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://cbfc03f3faafd7f3a2cce9ee71a8772e2ab0553420281b32af7d481797198e64}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:18:42.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-9911" for this suite. Apr 23 13:18:48.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:18:48.740: INFO: namespace deployment-9911 deletion completed in 6.085558557s • [SLOW TEST:15.218 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:18:48.741: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-kb8v STEP: Creating a pod to test atomic-volume-subpath Apr 23 13:18:48.836: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-kb8v" in namespace "subpath-4809" to be "success or failure" Apr 23 13:18:48.841: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.524828ms Apr 23 13:18:50.845: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008561645s Apr 23 13:18:52.850: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 4.013331319s Apr 23 13:18:54.854: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 6.017533536s Apr 23 13:18:56.858: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 8.021686246s Apr 23 13:18:58.862: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 10.025626073s Apr 23 13:19:00.866: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 12.029648094s Apr 23 13:19:02.870: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 14.033972427s Apr 23 13:19:04.874: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 16.03788023s Apr 23 13:19:06.879: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 18.042579331s Apr 23 13:19:08.883: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 20.04664535s Apr 23 13:19:10.887: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Running", Reason="", readiness=true. Elapsed: 22.050846426s Apr 23 13:19:12.892: INFO: Pod "pod-subpath-test-configmap-kb8v": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.055122795s STEP: Saw pod success Apr 23 13:19:12.892: INFO: Pod "pod-subpath-test-configmap-kb8v" satisfied condition "success or failure" Apr 23 13:19:12.895: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-kb8v container test-container-subpath-configmap-kb8v: STEP: delete the pod Apr 23 13:19:12.927: INFO: Waiting for pod pod-subpath-test-configmap-kb8v to disappear Apr 23 13:19:12.931: INFO: Pod pod-subpath-test-configmap-kb8v no longer exists STEP: Deleting pod pod-subpath-test-configmap-kb8v Apr 23 13:19:12.931: INFO: Deleting pod "pod-subpath-test-configmap-kb8v" in namespace "subpath-4809" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:19:12.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4809" for this suite. Apr 23 13:19:18.947: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:19:19.022: INFO: namespace subpath-4809 deletion completed in 6.085362752s • [SLOW TEST:30.281 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: 
Creating a kubernetes client Apr 23 13:19:19.022: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all Apr 23 13:19:19.104: INFO: Waiting up to 5m0s for pod "client-containers-f74b68b1-53dd-4e7c-81d8-214a6cf62de6" in namespace "containers-7064" to be "success or failure" Apr 23 13:19:19.108: INFO: Pod "client-containers-f74b68b1-53dd-4e7c-81d8-214a6cf62de6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.324788ms Apr 23 13:19:21.136: INFO: Pod "client-containers-f74b68b1-53dd-4e7c-81d8-214a6cf62de6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031116895s Apr 23 13:19:23.141: INFO: Pod "client-containers-f74b68b1-53dd-4e7c-81d8-214a6cf62de6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037075206s STEP: Saw pod success Apr 23 13:19:23.142: INFO: Pod "client-containers-f74b68b1-53dd-4e7c-81d8-214a6cf62de6" satisfied condition "success or failure" Apr 23 13:19:23.144: INFO: Trying to get logs from node iruya-worker pod client-containers-f74b68b1-53dd-4e7c-81d8-214a6cf62de6 container test-container: STEP: delete the pod Apr 23 13:19:23.164: INFO: Waiting for pod client-containers-f74b68b1-53dd-4e7c-81d8-214a6cf62de6 to disappear Apr 23 13:19:23.168: INFO: Pod client-containers-f74b68b1-53dd-4e7c-81d8-214a6cf62de6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:19:23.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7064" for this suite. 
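The "override all" test above exercises the interaction between a container image's default ENTRYPOINT/CMD and the pod spec's `command`/`args` fields: setting both replaces both image defaults. A minimal manifest of the shape such a test creates (the names and image here are illustrative, not the exact e2e fixture):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29  # any image with default entrypoint/cmd
    command: ["/bin/echo"]                 # overrides the image's ENTRYPOINT
    args: ["override", "arguments"]        # overrides the image's CMD
```

With both fields set, the kubelet runs `command` followed by `args` (`/bin/echo override arguments`), ignoring whatever the image declared; the test then reads the pod log to verify the overridden output.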
Apr 23 13:19:29.184: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:19:29.264: INFO: namespace containers-7064 deletion completed in 6.093336354s
• [SLOW TEST:10.242 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:19:29.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 23 13:19:29.325: INFO: Waiting up to 5m0s for pod "pod-789f8fef-0be0-4535-9492-22cd62bfc3a2" in namespace "emptydir-5423" to be "success or failure"
Apr 23 13:19:29.343: INFO: Pod "pod-789f8fef-0be0-4535-9492-22cd62bfc3a2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.370992ms
Apr 23 13:19:31.347: INFO: Pod "pod-789f8fef-0be0-4535-9492-22cd62bfc3a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021832502s
Apr 23 13:19:33.352: INFO: Pod "pod-789f8fef-0be0-4535-9492-22cd62bfc3a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02614072s
STEP: Saw pod success
Apr 23 13:19:33.352: INFO: Pod "pod-789f8fef-0be0-4535-9492-22cd62bfc3a2" satisfied condition "success or failure"
Apr 23 13:19:33.355: INFO: Trying to get logs from node iruya-worker2 pod pod-789f8fef-0be0-4535-9492-22cd62bfc3a2 container test-container:
STEP: delete the pod
Apr 23 13:19:33.374: INFO: Waiting for pod pod-789f8fef-0be0-4535-9492-22cd62bfc3a2 to disappear
Apr 23 13:19:33.423: INFO: Pod pod-789f8fef-0be0-4535-9492-22cd62bfc3a2 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:19:33.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5423" for this suite.
Apr 23 13:19:39.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:19:39.515: INFO: namespace emptydir-5423 deletion completed in 6.088912111s
• [SLOW TEST:10.250 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:19:39.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:19:39.555: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:19:43.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5070" for this suite.
Apr 23 13:20:33.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:20:33.705: INFO: namespace pods-5070 deletion completed in 50.088012782s
• [SLOW TEST:54.190 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:20:33.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-eb9a3413-20a2-4fb6-b657-339f1c0a70f8 in namespace container-probe-9694
Apr 23 13:20:37.769: INFO: Started pod busybox-eb9a3413-20a2-4fb6-b657-339f1c0a70f8 in namespace container-probe-9694
STEP: checking the pod's current state and verifying that restartCount is present
Apr 23 13:20:37.772: INFO: Initial restart count of pod busybox-eb9a3413-20a2-4fb6-b657-339f1c0a70f8 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:24:39.263: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9694" for this suite.
Apr 23 13:24:45.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:24:45.405: INFO: namespace container-probe-9694 deletion completed in 6.136463843s
• [SLOW TEST:251.700 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:24:45.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 23 13:24:49.518: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:24:49.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6913" for this suite.
Apr 23 13:24:55.581: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:24:55.702: INFO: namespace container-runtime-6913 deletion completed in 6.132816604s
• [SLOW TEST:10.297 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:24:55.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Apr 23 13:24:55.794: INFO: Waiting up to 5m0s for pod "client-containers-4295f457-08bc-43ee-9326-269fd61b3ab9" in namespace "containers-130" to be "success or failure"
Apr 23 13:24:55.806: INFO: Pod "client-containers-4295f457-08bc-43ee-9326-269fd61b3ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 12.044809ms
Apr 23 13:24:57.827: INFO: Pod "client-containers-4295f457-08bc-43ee-9326-269fd61b3ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0332327s
Apr 23 13:24:59.831: INFO: Pod "client-containers-4295f457-08bc-43ee-9326-269fd61b3ab9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036854542s
STEP: Saw pod success
Apr 23 13:24:59.831: INFO: Pod "client-containers-4295f457-08bc-43ee-9326-269fd61b3ab9" satisfied condition "success or failure"
Apr 23 13:24:59.833: INFO: Trying to get logs from node iruya-worker pod client-containers-4295f457-08bc-43ee-9326-269fd61b3ab9 container test-container:
STEP: delete the pod
Apr 23 13:24:59.849: INFO: Waiting for pod client-containers-4295f457-08bc-43ee-9326-269fd61b3ab9 to disappear
Apr 23 13:24:59.853: INFO: Pod client-containers-4295f457-08bc-43ee-9326-269fd61b3ab9 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:24:59.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-130" for this suite.
Apr 23 13:25:05.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:25:05.947: INFO: namespace containers-130 deletion completed in 6.091115163s
• [SLOW TEST:10.245 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:25:05.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:25:12.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7893" for this suite.
Apr 23 13:25:54.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:25:54.148: INFO: namespace kubelet-test-7893 deletion completed in 42.093908187s
• [SLOW TEST:48.201 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:25:54.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 23 13:25:54.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-2007'
Apr 23 13:25:56.457: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 23 13:25:56.458: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Apr 23 13:25:56.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-2007'
Apr 23 13:25:56.596: INFO: stderr: ""
Apr 23 13:25:56.596: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:25:56.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2007" for this suite.
Apr 23 13:26:02.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:26:02.715: INFO: namespace kubectl-2007 deletion completed in 6.104817783s
• [SLOW TEST:8.566 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:26:02.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-da7fa38e-07d3-4a2e-8c00-cd1f6253b0ac
STEP: Creating configMap with name cm-test-opt-upd-8f1b4e4d-3a1f-430a-a7c7-52601b7f6c66
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-da7fa38e-07d3-4a2e-8c00-cd1f6253b0ac
STEP: Updating configmap cm-test-opt-upd-8f1b4e4d-3a1f-430a-a7c7-52601b7f6c66
STEP: Creating configMap with name cm-test-opt-create-f7cb8d28-eaaa-4d0c-b592-7e317ebc5600
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:26:10.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6048" for this suite.
Apr 23 13:26:33.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:26:33.077: INFO: namespace configmap-6048 deletion completed in 22.083118822s
• [SLOW TEST:30.361 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:26:33.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0b23d21d-9007-427a-a615-2e3015d2d17e
STEP: Creating a pod to test consume configMaps
Apr 23 13:26:33.156: INFO: Waiting up to 5m0s for pod "pod-configmaps-0ebde468-3af3-42bf-8cca-977dc1636f6f" in namespace "configmap-9073" to be "success or failure"
Apr 23 13:26:33.161: INFO: Pod "pod-configmaps-0ebde468-3af3-42bf-8cca-977dc1636f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.386475ms
Apr 23 13:26:35.165: INFO: Pod "pod-configmaps-0ebde468-3af3-42bf-8cca-977dc1636f6f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008830262s
Apr 23 13:26:37.170: INFO: Pod "pod-configmaps-0ebde468-3af3-42bf-8cca-977dc1636f6f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01313274s
STEP: Saw pod success
Apr 23 13:26:37.170: INFO: Pod "pod-configmaps-0ebde468-3af3-42bf-8cca-977dc1636f6f" satisfied condition "success or failure"
Apr 23 13:26:37.172: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0ebde468-3af3-42bf-8cca-977dc1636f6f container configmap-volume-test:
STEP: delete the pod
Apr 23 13:26:37.192: INFO: Waiting for pod pod-configmaps-0ebde468-3af3-42bf-8cca-977dc1636f6f to disappear
Apr 23 13:26:37.197: INFO: Pod pod-configmaps-0ebde468-3af3-42bf-8cca-977dc1636f6f no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:26:37.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9073" for this suite.
Apr 23 13:26:43.213: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:26:43.283: INFO: namespace configmap-9073 deletion completed in 6.083421245s
• [SLOW TEST:10.206 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:26:43.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5740.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5740.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 23 13:26:49.398: INFO: DNS probes using dns-5740/dns-test-d6250619-4bbe-4691-9fb5-a7166403e017 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:26:49.455: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5740" for this suite.
Apr 23 13:26:55.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:26:55.558: INFO: namespace dns-5740 deletion completed in 6.095199484s
• [SLOW TEST:12.275 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for the cluster [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:26:55.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-626ed4ae-0da5-4e87-bcce-b4c04cd8dde4
STEP: Creating a pod to test consume configMaps
Apr 23 13:26:55.632: INFO: Waiting up to 5m0s for pod "pod-configmaps-18ad89de-0809-4b11-92ab-1283c4fbe2f3" in namespace "configmap-1608" to be "success or failure"
Apr 23 13:26:55.636: INFO: Pod "pod-configmaps-18ad89de-0809-4b11-92ab-1283c4fbe2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.679115ms
Apr 23 13:26:57.640: INFO: Pod "pod-configmaps-18ad89de-0809-4b11-92ab-1283c4fbe2f3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00796616s
Apr 23 13:26:59.645: INFO: Pod "pod-configmaps-18ad89de-0809-4b11-92ab-1283c4fbe2f3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012470707s
STEP: Saw pod success
Apr 23 13:26:59.645: INFO: Pod "pod-configmaps-18ad89de-0809-4b11-92ab-1283c4fbe2f3" satisfied condition "success or failure"
Apr 23 13:26:59.648: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-18ad89de-0809-4b11-92ab-1283c4fbe2f3 container configmap-volume-test:
STEP: delete the pod
Apr 23 13:26:59.667: INFO: Waiting for pod pod-configmaps-18ad89de-0809-4b11-92ab-1283c4fbe2f3 to disappear
Apr 23 13:26:59.717: INFO: Pod pod-configmaps-18ad89de-0809-4b11-92ab-1283c4fbe2f3 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:26:59.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1608" for this suite.
Apr 23 13:27:05.813: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:27:05.912: INFO: namespace configmap-1608 deletion completed in 6.19246561s
• [SLOW TEST:10.354 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:27:05.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-7b547f14-a422-495f-849c-f3637088f9c4
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:27:10.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9593" for this suite.
Apr 23 13:27:32.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:27:32.246: INFO: namespace configmap-9593 deletion completed in 22.126798149s
• [SLOW TEST:26.333 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:27:32.247: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 23 13:27:32.303: INFO: Waiting up to 5m0s for pod "downward-api-ae318256-e25b-4689-add6-25395eccae38" in namespace "downward-api-6582" to be "success or failure"
Apr 23 13:27:32.313: INFO: Pod "downward-api-ae318256-e25b-4689-add6-25395eccae38": Phase="Pending", Reason="", readiness=false. Elapsed: 10.256381ms
Apr 23 13:27:34.317: INFO: Pod "downward-api-ae318256-e25b-4689-add6-25395eccae38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014609525s
Apr 23 13:27:36.322: INFO: Pod "downward-api-ae318256-e25b-4689-add6-25395eccae38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018879689s
STEP: Saw pod success
Apr 23 13:27:36.322: INFO: Pod "downward-api-ae318256-e25b-4689-add6-25395eccae38" satisfied condition "success or failure"
Apr 23 13:27:36.324: INFO: Trying to get logs from node iruya-worker2 pod downward-api-ae318256-e25b-4689-add6-25395eccae38 container dapi-container:
STEP: delete the pod
Apr 23 13:27:36.356: INFO: Waiting for pod downward-api-ae318256-e25b-4689-add6-25395eccae38 to disappear
Apr 23 13:27:36.366: INFO: Pod downward-api-ae318256-e25b-4689-add6-25395eccae38 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:27:36.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6582" for this suite.
Apr 23 13:27:42.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:27:42.474: INFO: namespace downward-api-6582 deletion completed in 6.105482644s
• [SLOW TEST:10.228 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:27:42.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 23 13:27:42.558: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 23 13:27:42.580: INFO: Waiting for terminating namespaces to be deleted...
Apr 23 13:27:42.583: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 23 13:27:42.587: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 23 13:27:42.587: INFO: Container kube-proxy ready: true, restart count 0
Apr 23 13:27:42.587: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 23 13:27:42.587: INFO: Container kindnet-cni ready: true, restart count 0
Apr 23 13:27:42.587: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 23 13:27:42.592: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 23 13:27:42.592: INFO: Container coredns ready: true, restart count 0
Apr 23 13:27:42.592: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 23 13:27:42.592: INFO: Container coredns ready: true, restart count 0
Apr 23 13:27:42.592: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 23 13:27:42.592: INFO: Container kube-proxy ready: true, restart count 0
Apr 23 13:27:42.592: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 23 13:27:42.592: INFO: Container kindnet-cni ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.160875dd71eeb137], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.]
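The `FailedScheduling` event above is what the test expects: the pod's `spec.nodeSelector` names labels no node carries. The predicate itself is a simple superset check; a minimal Python sketch (the function name is hypothetical, and the real scheduler check lives in Go):

```python
def node_selector_matches(node_labels, node_selector):
    """A node fits a pod's spec.nodeSelector only if every key/value pair in
    the selector appears verbatim in the node's labels -- the check behind
    the '3 node(s) didn't match node selector' FailedScheduling event."""
    return all(node_labels.get(key) == value
               for key, value in node_selector.items())
```

The test schedules a pod with a nonempty selector that matches none of the three nodes, so every node fails this check and the pod stays Pending.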
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:27:43.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-950" for this suite.
Apr 23 13:27:49.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:27:49.704: INFO: namespace sched-pred-950 deletion completed in 6.087600064s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.229 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:27:49.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-sfs6
STEP: Creating a pod to test atomic-volume-subpath
Apr 23 13:27:49.805: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-sfs6" in namespace "subpath-6532" to be "success or failure"
Apr 23 13:27:49.819: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Pending", Reason="", readiness=false. Elapsed: 14.524744ms
Apr 23 13:27:51.824: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019380776s
Apr 23 13:27:53.829: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 4.024349962s
Apr 23 13:27:55.833: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 6.028260711s
Apr 23 13:27:57.842: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 8.037724636s
Apr 23 13:27:59.847: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 10.042225886s
Apr 23 13:28:01.851: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 12.0468962s
Apr 23 13:28:03.856: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 14.0514562s
Apr 23 13:28:05.860: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 16.055533654s
Apr 23 13:28:07.865: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 18.06016371s
Apr 23 13:28:09.870: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 20.065201288s
Apr 23 13:28:11.874: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Running", Reason="", readiness=true. Elapsed: 22.068997017s
Apr 23 13:28:13.878: INFO: Pod "pod-subpath-test-downwardapi-sfs6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.073085015s
STEP: Saw pod success
Apr 23 13:28:13.878: INFO: Pod "pod-subpath-test-downwardapi-sfs6" satisfied condition "success or failure"
Apr 23 13:28:13.881: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-sfs6 container test-container-subpath-downwardapi-sfs6:
STEP: delete the pod
Apr 23 13:28:13.901: INFO: Waiting for pod pod-subpath-test-downwardapi-sfs6 to disappear
Apr 23 13:28:13.951: INFO: Pod pod-subpath-test-downwardapi-sfs6 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-sfs6
Apr 23 13:28:13.951: INFO: Deleting pod "pod-subpath-test-downwardapi-sfs6" in namespace "subpath-6532"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:28:13.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6532" for this suite.
Apr 23 13:28:19.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:28:20.062: INFO: namespace subpath-6532 deletion completed in 6.099313214s

• [SLOW TEST:30.358 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:28:20.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 23 13:28:20.763: INFO: Pod name wrapped-volume-race-970c9396-8de0-4da1-932a-ec020aa598d0: Found 0 pods out of 5
Apr 23 13:28:25.772: INFO: Pod name wrapped-volume-race-970c9396-8de0-4da1-932a-ec020aa598d0: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-970c9396-8de0-4da1-932a-ec020aa598d0 in namespace emptydir-wrapper-538, will wait for the garbage collector to delete the pods
Apr 23 13:28:39.857: INFO: Deleting ReplicationController wrapped-volume-race-970c9396-8de0-4da1-932a-ec020aa598d0 took: 8.547489ms
Apr 23 13:28:40.157: INFO: Terminating ReplicationController wrapped-volume-race-970c9396-8de0-4da1-932a-ec020aa598d0 pods took: 300.29557ms
STEP: Creating RC which spawns configmap-volume pods
Apr 23 13:29:23.288: INFO: Pod name wrapped-volume-race-97c5099b-7732-4d51-b02b-22963c8946ab: Found 0 pods out of 5
Apr 23 13:29:28.554: INFO: Pod name wrapped-volume-race-97c5099b-7732-4d51-b02b-22963c8946ab: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-97c5099b-7732-4d51-b02b-22963c8946ab in namespace emptydir-wrapper-538, will wait for the garbage collector to delete the pods
Apr 23 13:29:42.662: INFO: Deleting ReplicationController wrapped-volume-race-97c5099b-7732-4d51-b02b-22963c8946ab took: 29.848782ms
Apr 23 13:29:42.962: INFO: Terminating ReplicationController wrapped-volume-race-97c5099b-7732-4d51-b02b-22963c8946ab pods took: 300.271711ms
STEP: Creating RC which spawns configmap-volume pods
Apr 23 13:30:22.290: INFO: Pod name wrapped-volume-race-6ad1f1f1-44fb-47f3-ba0b-b84edcff5b98: Found 0 pods out of 5
Apr 23 13:30:27.299: INFO: Pod name wrapped-volume-race-6ad1f1f1-44fb-47f3-ba0b-b84edcff5b98: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-6ad1f1f1-44fb-47f3-ba0b-b84edcff5b98 in namespace emptydir-wrapper-538, will wait for the garbage collector to delete the pods
Apr 23 13:30:41.442: INFO: Deleting ReplicationController wrapped-volume-race-6ad1f1f1-44fb-47f3-ba0b-b84edcff5b98 took: 72.781316ms
Apr 23 13:30:41.742: INFO: Terminating ReplicationController wrapped-volume-race-6ad1f1f1-44fb-47f3-ba0b-b84edcff5b98 pods took: 300.318702ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:31:23.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-538" for this suite.
Apr 23 13:31:31.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:31:31.909: INFO: namespace emptydir-wrapper-538 deletion completed in 8.145158971s

• [SLOW TEST:191.847 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:31:31.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:31:38.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6045" for this suite.
Apr 23 13:31:44.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:31:44.318: INFO: namespace namespaces-6045 deletion completed in 6.098027869s
STEP: Destroying namespace "nsdeletetest-2366" for this suite.
Apr 23 13:31:44.321: INFO: Namespace nsdeletetest-2366 was already deleted
STEP: Destroying namespace "nsdeletetest-7899" for this suite.
Apr 23 13:31:50.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:31:50.420: INFO: namespace nsdeletetest-7899 deletion completed in 6.098889342s

• [SLOW TEST:18.510 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:31:50.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-b100bd7f-6197-471e-ba20-fa43dd5e1d4a
STEP: Creating a pod to test consume secrets
Apr 23 13:31:50.504: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f4beebcf-6e19-4163-a60f-b59071f02bdb" in namespace "projected-1096" to be "success or failure"
Apr 23 13:31:50.595: INFO: Pod "pod-projected-secrets-f4beebcf-6e19-4163-a60f-b59071f02bdb": Phase="Pending", Reason="", readiness=false. Elapsed: 91.402791ms
Apr 23 13:31:52.598: INFO: Pod "pod-projected-secrets-f4beebcf-6e19-4163-a60f-b59071f02bdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094751762s
Apr 23 13:31:54.603: INFO: Pod "pod-projected-secrets-f4beebcf-6e19-4163-a60f-b59071f02bdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.099517211s
STEP: Saw pod success
Apr 23 13:31:54.603: INFO: Pod "pod-projected-secrets-f4beebcf-6e19-4163-a60f-b59071f02bdb" satisfied condition "success or failure"
Apr 23 13:31:54.606: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-f4beebcf-6e19-4163-a60f-b59071f02bdb container projected-secret-volume-test:
STEP: delete the pod
Apr 23 13:31:54.642: INFO: Waiting for pod pod-projected-secrets-f4beebcf-6e19-4163-a60f-b59071f02bdb to disappear
Apr 23 13:31:54.656: INFO: Pod pod-projected-secrets-f4beebcf-6e19-4163-a60f-b59071f02bdb no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:31:54.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1096" for this suite.
Apr 23 13:32:00.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:32:00.765: INFO: namespace projected-1096 deletion completed in 6.106061233s

• [SLOW TEST:10.345 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:32:00.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 23 13:32:00.836: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:32:08.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4650" for this suite.
Apr 23 13:32:30.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:32:30.844: INFO: namespace init-container-4650 deletion completed in 22.10472574s

• [SLOW TEST:30.078 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:32:30.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0423 13:33:10.910752 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 23 13:33:10.910: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:33:10.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2060" for this suite.
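The orphaning behavior this test exercises is driven by the delete request's `propagationPolicy`. As an illustrative sketch only (the helper name is hypothetical, and the test itself issues the request in Go via client-go), the request body for deleting the RC while leaving its pods behind looks roughly like this:

```python
import json

def orphan_delete_options():
    """Sketch of a DeleteOptions body asking the garbage collector to orphan
    dependents: the RC is removed, but its pods keep running -- which is why
    the test waits 30 seconds to confirm the GC does NOT delete them."""
    return {
        "kind": "DeleteOptions",
        "apiVersion": "v1",
        # "Orphan" strips the ownerReferences instead of cascading the delete;
        # "Background"/"Foreground" would cascade to the pods.
        "propagationPolicy": "Orphan",
    }

body = json.dumps(orphan_delete_options())
```

With `kubectl`, the equivalent is roughly `kubectl delete rc <name> --cascade=orphan` on current versions (older releases used `--cascade=false`).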
Apr 23 13:33:20.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:33:20.994: INFO: namespace gc-2060 deletion completed in 10.080944698s

• [SLOW TEST:50.149 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:33:20.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 23 13:33:21.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2516'
Apr 23 13:33:21.368: INFO: stderr: ""
Apr 23 13:33:21.368: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 23 13:33:22.373: INFO: Selector matched 1 pods for map[app:redis]
Apr 23 13:33:22.373: INFO: Found 0 / 1
Apr 23 13:33:23.373: INFO: Selector matched 1 pods for map[app:redis]
Apr 23 13:33:23.373: INFO: Found 0 / 1
Apr 23 13:33:24.372: INFO: Selector matched 1 pods for map[app:redis]
Apr 23 13:33:24.372: INFO: Found 0 / 1
Apr 23 13:33:25.373: INFO: Selector matched 1 pods for map[app:redis]
Apr 23 13:33:25.373: INFO: Found 1 / 1
Apr 23 13:33:25.373: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
Apr 23 13:33:25.377: INFO: Selector matched 1 pods for map[app:redis]
Apr 23 13:33:25.377: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 23 13:33:25.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-2wsdf --namespace=kubectl-2516 -p {"metadata":{"annotations":{"x":"y"}}}'
Apr 23 13:33:25.483: INFO: stderr: ""
Apr 23 13:33:25.483: INFO: stdout: "pod/redis-master-2wsdf patched\n"
STEP: checking annotations
Apr 23 13:33:25.503: INFO: Selector matched 1 pods for map[app:redis]
Apr 23 13:33:25.503: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:33:25.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2516" for this suite.
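The `kubectl patch ... -p '{"metadata":{"annotations":{"x":"y"}}}'` call above applies merge-patch semantics: existing annotations are preserved and `x: y` is added alongside them. A minimal Python sketch of JSON merge patch (RFC 7386), simplified and not kubectl's actual implementation, which uses strategic merge patch for built-in types:

```python
import copy

def merge_patch(original, patch):
    """Apply a JSON merge patch (RFC 7386): dicts merge recursively,
    null deletes a key, and any non-dict value replaces wholesale."""
    if not isinstance(patch, dict):
        return patch
    result = copy.deepcopy(original) if isinstance(original, dict) else {}
    for key, value in patch.items():
        if value is None:
            result.pop(key, None)  # null in the patch removes the key
        else:
            result[key] = merge_patch(result.get(key), value)
    return result
```

Applied to the pod above, the patch adds the `x: y` annotation without touching the pod's name or any annotation already present.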
Apr 23 13:33:47.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:33:47.590: INFO: namespace kubectl-2516 deletion completed in 22.083470509s

• [SLOW TEST:26.595 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:33:47.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:33:47.646: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:33:48.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7802" for this suite.
Apr 23 13:33:54.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:33:54.885: INFO: namespace custom-resource-definition-7802 deletion completed in 6.162208827s

• [SLOW TEST:7.293 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:33:54.885: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Apr 23 13:34:03.213: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 23 13:34:03.221: INFO: Pod pod-with-poststart-http-hook still exists
Apr 23 13:34:05.221: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 23 13:34:05.225: INFO: Pod pod-with-poststart-http-hook still exists
Apr 23 13:34:07.221: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Apr 23 13:34:07.225: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:34:07.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7279" for this suite.
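The alternating "Waiting for pod ... to disappear" / "still exists" entries above are a second recurring polling pattern in this run: re-check on a fixed interval until the pod is gone. A minimal Python sketch; the function name and injectable clock are hypothetical, not the framework's Go implementation:

```python
import time

def wait_for_pod_gone(pod_exists, timeout=60.0, interval=2.0,
                      now=time.monotonic, sleep=time.sleep):
    """Poll pod_exists() every `interval` seconds (the ~2s cadence visible in
    the log) until the pod is gone or `timeout` expires.  Returns True if the
    pod disappeared in time, False otherwise."""
    deadline = now() + timeout
    while now() < deadline:
        if not pod_exists():  # e.g. a GET that 404s once the pod is deleted
            return True
        sleep(interval)
    return False
```

As with the phase-wait loop, the hooks exist only to make the logic testable without a cluster.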
Apr 23 13:34:29.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:34:29.317: INFO: namespace container-lifecycle-hook-7279 deletion completed in 22.088303886s

• [SLOW TEST:34.433 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:34:29.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 23 13:34:29.418: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:29.420: INFO: Number of nodes with available pods: 0
Apr 23 13:34:29.420: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:34:30.425: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:30.428: INFO: Number of nodes with available pods: 0
Apr 23 13:34:30.428: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:34:31.425: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:31.429: INFO: Number of nodes with available pods: 0
Apr 23 13:34:31.429: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:34:32.442: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:32.445: INFO: Number of nodes with available pods: 0
Apr 23 13:34:32.445: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:34:33.426: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:33.430: INFO: Number of nodes with available pods: 1
Apr 23 13:34:33.430: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:34:34.425: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:34.429: INFO: Number of nodes with available pods: 2
Apr 23 13:34:34.429: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Apr 23 13:34:34.450: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:34.473: INFO: Number of nodes with available pods: 1
Apr 23 13:34:34.473: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:34:35.478: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:35.481: INFO: Number of nodes with available pods: 1
Apr 23 13:34:35.481: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:34:36.495: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:36.499: INFO: Number of nodes with available pods: 1
Apr 23 13:34:36.499: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:34:37.479: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:34:37.483: INFO: Number of nodes with available pods: 2
Apr 23 13:34:37.483: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-152, will wait for the garbage collector to delete the pods
Apr 23 13:34:37.545: INFO: Deleting DaemonSet.extensions daemon-set took: 4.617727ms
Apr 23 13:34:37.846: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.225294ms
Apr 23 13:34:52.256: INFO: Number of nodes with available pods: 0
Apr 23 13:34:52.256: INFO: Number of running nodes: 0, number of available pods: 0
Apr 23 13:34:52.259: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-152/daemonsets","resourceVersion":"7003919"},"items":null}
Apr 23 13:34:52.261: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-152/pods","resourceVersion":"7003919"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:34:52.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-152" for this suite.
Apr 23 13:34:58.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:34:58.363: INFO: namespace daemonsets-152 deletion completed in 6.08873719s
• [SLOW TEST:29.046 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:34:58.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Apr 23 13:34:58.456: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-3205" to be "success or failure"
Apr 23 13:34:58.475: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 19.779752ms
Apr 23 13:35:00.479: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023787042s
Apr 23 13:35:02.484: INFO: Pod "pod-host-path-test": Phase="Running", Reason="", readiness=false. Elapsed: 4.02800418s
Apr 23 13:35:04.488: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.032904912s
STEP: Saw pod success
Apr 23 13:35:04.489: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Apr 23 13:35:04.492: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1:
STEP: delete the pod
Apr 23 13:35:04.512: INFO: Waiting for pod pod-host-path-test to disappear
Apr 23 13:35:04.517: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:35:04.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-3205" for this suite.
Apr 23 13:35:10.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:35:10.648: INFO: namespace hostpath-3205 deletion completed in 6.110591443s
• [SLOW TEST:12.285 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:35:10.649: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 13:35:10.700: INFO: Waiting up to 5m0s for pod "downwardapi-volume-243d107f-a218-4d58-a48f-f0c5f604de3a" in namespace "projected-7317" to be "success or failure"
Apr 23 13:35:10.703: INFO: Pod "downwardapi-volume-243d107f-a218-4d58-a48f-f0c5f604de3a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.326179ms
Apr 23 13:35:12.707: INFO: Pod "downwardapi-volume-243d107f-a218-4d58-a48f-f0c5f604de3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007401046s
Apr 23 13:35:14.712: INFO: Pod "downwardapi-volume-243d107f-a218-4d58-a48f-f0c5f604de3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01168699s
STEP: Saw pod success
Apr 23 13:35:14.712: INFO: Pod "downwardapi-volume-243d107f-a218-4d58-a48f-f0c5f604de3a" satisfied condition "success or failure"
Apr 23 13:35:14.715: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-243d107f-a218-4d58-a48f-f0c5f604de3a container client-container:
STEP: delete the pod
Apr 23 13:35:14.753: INFO: Waiting for pod downwardapi-volume-243d107f-a218-4d58-a48f-f0c5f604de3a to disappear
Apr 23 13:35:14.773: INFO: Pod downwardapi-volume-243d107f-a218-4d58-a48f-f0c5f604de3a no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:35:14.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7317" for this suite.
Apr 23 13:35:20.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:35:20.869: INFO: namespace projected-7317 deletion completed in 6.092392273s
• [SLOW TEST:10.221 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:35:20.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:35:25.093: INFO: Waiting up to 5m0s for pod "client-envvars-bddc113d-bc77-4aa7-add3-e6f9dd61b34e" in namespace "pods-5688" to be "success or failure"
Apr 23 13:35:25.099: INFO: Pod "client-envvars-bddc113d-bc77-4aa7-add3-e6f9dd61b34e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.604519ms
Apr 23 13:35:27.102: INFO: Pod "client-envvars-bddc113d-bc77-4aa7-add3-e6f9dd61b34e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009260911s
Apr 23 13:35:29.106: INFO: Pod "client-envvars-bddc113d-bc77-4aa7-add3-e6f9dd61b34e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012882653s
STEP: Saw pod success
Apr 23 13:35:29.106: INFO: Pod "client-envvars-bddc113d-bc77-4aa7-add3-e6f9dd61b34e" satisfied condition "success or failure"
Apr 23 13:35:29.108: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-bddc113d-bc77-4aa7-add3-e6f9dd61b34e container env3cont:
STEP: delete the pod
Apr 23 13:35:29.149: INFO: Waiting for pod client-envvars-bddc113d-bc77-4aa7-add3-e6f9dd61b34e to disappear
Apr 23 13:35:29.189: INFO: Pod client-envvars-bddc113d-bc77-4aa7-add3-e6f9dd61b34e no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:35:29.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5688" for this suite.
Apr 23 13:36:15.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:36:15.299: INFO: namespace pods-5688 deletion completed in 46.105780155s
• [SLOW TEST:54.429 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
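The "should contain environment variables for services" spec above checks that a pod started after a Service receives Docker-links-style variables for it. The naming convention Kubernetes documents for these variables (service name upper-cased, dashes turned into underscores, then `_SERVICE_HOST` / `_SERVICE_PORT` appended) can be sketched as below; `serviceEnvVars` is an illustrative helper, not part of the e2e framework.

```go
package main

import (
	"fmt"
	"strings"
)

// serviceEnvVars derives the environment variable names Kubernetes
// injects into pods for an existing Service: the service name is
// upper-cased and dashes become underscores.
func serviceEnvVars(serviceName string) []string {
	prefix := strings.ToUpper(strings.ReplaceAll(serviceName, "-", "_"))
	return []string{
		prefix + "_SERVICE_HOST",
		prefix + "_SERVICE_PORT",
	}
}

func main() {
	// A Service named "redis-master" yields REDIS_MASTER_SERVICE_HOST
	// and REDIS_MASTER_SERVICE_PORT in dependent pods.
	fmt.Println(serviceEnvVars("redis-master"))
}
```

Because these variables are resolved at container start, the test creates the Service first and only then the client pod whose environment it inspects.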
STEP: Creating a kubernetes client Apr 23 13:36:15.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 23 13:36:25.413: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:36:25.413: INFO: >>> kubeConfig: /root/.kube/config I0423 13:36:25.453938 6 log.go:172] (0xc000dfc580) (0xc000984b40) Create stream I0423 13:36:25.453971 6 log.go:172] (0xc000dfc580) (0xc000984b40) Stream added, broadcasting: 1 I0423 13:36:25.456073 6 log.go:172] (0xc000dfc580) Reply frame received for 1 I0423 13:36:25.456110 6 log.go:172] (0xc000dfc580) (0xc00114a5a0) Create stream I0423 13:36:25.456124 6 log.go:172] (0xc000dfc580) (0xc00114a5a0) Stream added, broadcasting: 3 I0423 13:36:25.456923 6 log.go:172] (0xc000dfc580) Reply frame received for 3 I0423 13:36:25.456952 6 log.go:172] (0xc000dfc580) (0xc0000fe320) Create stream I0423 13:36:25.456961 6 log.go:172] (0xc000dfc580) (0xc0000fe320) Stream added, broadcasting: 5 I0423 13:36:25.458126 6 log.go:172] (0xc000dfc580) Reply frame received for 5 I0423 13:36:25.526648 6 log.go:172] (0xc000dfc580) Data frame received for 5 I0423 13:36:25.526688 6 log.go:172] (0xc0000fe320) (5) Data frame handling I0423 13:36:25.526714 6 log.go:172] (0xc000dfc580) Data frame received for 3 I0423 13:36:25.526730 6 log.go:172] (0xc00114a5a0) (3) Data frame handling I0423 
13:36:25.526744 6 log.go:172] (0xc00114a5a0) (3) Data frame sent I0423 13:36:25.526755 6 log.go:172] (0xc000dfc580) Data frame received for 3 I0423 13:36:25.526768 6 log.go:172] (0xc00114a5a0) (3) Data frame handling I0423 13:36:25.528514 6 log.go:172] (0xc000dfc580) Data frame received for 1 I0423 13:36:25.528554 6 log.go:172] (0xc000984b40) (1) Data frame handling I0423 13:36:25.528594 6 log.go:172] (0xc000984b40) (1) Data frame sent I0423 13:36:25.528621 6 log.go:172] (0xc000dfc580) (0xc000984b40) Stream removed, broadcasting: 1 I0423 13:36:25.528657 6 log.go:172] (0xc000dfc580) Go away received I0423 13:36:25.528781 6 log.go:172] (0xc000dfc580) (0xc000984b40) Stream removed, broadcasting: 1 I0423 13:36:25.528825 6 log.go:172] (0xc000dfc580) (0xc00114a5a0) Stream removed, broadcasting: 3 I0423 13:36:25.528861 6 log.go:172] (0xc000dfc580) (0xc0000fe320) Stream removed, broadcasting: 5 Apr 23 13:36:25.528: INFO: Exec stderr: "" Apr 23 13:36:25.528: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:36:25.528: INFO: >>> kubeConfig: /root/.kube/config I0423 13:36:25.561768 6 log.go:172] (0xc000c7c6e0) (0xc001b35680) Create stream I0423 13:36:25.561804 6 log.go:172] (0xc000c7c6e0) (0xc001b35680) Stream added, broadcasting: 1 I0423 13:36:25.564132 6 log.go:172] (0xc000c7c6e0) Reply frame received for 1 I0423 13:36:25.564189 6 log.go:172] (0xc000c7c6e0) (0xc002eaa140) Create stream I0423 13:36:25.564209 6 log.go:172] (0xc000c7c6e0) (0xc002eaa140) Stream added, broadcasting: 3 I0423 13:36:25.565057 6 log.go:172] (0xc000c7c6e0) Reply frame received for 3 I0423 13:36:25.565088 6 log.go:172] (0xc000c7c6e0) (0xc00114aa00) Create stream I0423 13:36:25.565098 6 log.go:172] (0xc000c7c6e0) (0xc00114aa00) Stream added, broadcasting: 5 I0423 13:36:25.566451 6 log.go:172] (0xc000c7c6e0) Reply frame received for 5 I0423 
13:36:25.623469 6 log.go:172] (0xc000c7c6e0) Data frame received for 3 I0423 13:36:25.623502 6 log.go:172] (0xc002eaa140) (3) Data frame handling I0423 13:36:25.623516 6 log.go:172] (0xc002eaa140) (3) Data frame sent I0423 13:36:25.623525 6 log.go:172] (0xc000c7c6e0) Data frame received for 3 I0423 13:36:25.623530 6 log.go:172] (0xc002eaa140) (3) Data frame handling I0423 13:36:25.623546 6 log.go:172] (0xc000c7c6e0) Data frame received for 5 I0423 13:36:25.623554 6 log.go:172] (0xc00114aa00) (5) Data frame handling I0423 13:36:25.625307 6 log.go:172] (0xc000c7c6e0) Data frame received for 1 I0423 13:36:25.625329 6 log.go:172] (0xc001b35680) (1) Data frame handling I0423 13:36:25.625346 6 log.go:172] (0xc001b35680) (1) Data frame sent I0423 13:36:25.625356 6 log.go:172] (0xc000c7c6e0) (0xc001b35680) Stream removed, broadcasting: 1 I0423 13:36:25.625449 6 log.go:172] (0xc000c7c6e0) (0xc001b35680) Stream removed, broadcasting: 1 I0423 13:36:25.625459 6 log.go:172] (0xc000c7c6e0) (0xc002eaa140) Stream removed, broadcasting: 3 I0423 13:36:25.625635 6 log.go:172] (0xc000c7c6e0) Go away received I0423 13:36:25.625804 6 log.go:172] (0xc000c7c6e0) (0xc00114aa00) Stream removed, broadcasting: 5 Apr 23 13:36:25.625: INFO: Exec stderr: "" Apr 23 13:36:25.625: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:36:25.625: INFO: >>> kubeConfig: /root/.kube/config I0423 13:36:25.701874 6 log.go:172] (0xc000db0f20) (0xc00114b540) Create stream I0423 13:36:25.701911 6 log.go:172] (0xc000db0f20) (0xc00114b540) Stream added, broadcasting: 1 I0423 13:36:25.704107 6 log.go:172] (0xc000db0f20) Reply frame received for 1 I0423 13:36:25.704145 6 log.go:172] (0xc000db0f20) (0xc001b35860) Create stream I0423 13:36:25.704160 6 log.go:172] (0xc000db0f20) (0xc001b35860) Stream added, broadcasting: 3 I0423 13:36:25.705233 6 log.go:172] 
(0xc000db0f20) Reply frame received for 3 I0423 13:36:25.705276 6 log.go:172] (0xc000db0f20) (0xc00114bae0) Create stream I0423 13:36:25.705293 6 log.go:172] (0xc000db0f20) (0xc00114bae0) Stream added, broadcasting: 5 I0423 13:36:25.706138 6 log.go:172] (0xc000db0f20) Reply frame received for 5 I0423 13:36:25.760678 6 log.go:172] (0xc000db0f20) Data frame received for 5 I0423 13:36:25.760745 6 log.go:172] (0xc00114bae0) (5) Data frame handling I0423 13:36:25.760788 6 log.go:172] (0xc000db0f20) Data frame received for 3 I0423 13:36:25.760816 6 log.go:172] (0xc001b35860) (3) Data frame handling I0423 13:36:25.760837 6 log.go:172] (0xc001b35860) (3) Data frame sent I0423 13:36:25.760856 6 log.go:172] (0xc000db0f20) Data frame received for 3 I0423 13:36:25.760866 6 log.go:172] (0xc001b35860) (3) Data frame handling I0423 13:36:25.762248 6 log.go:172] (0xc000db0f20) Data frame received for 1 I0423 13:36:25.762306 6 log.go:172] (0xc00114b540) (1) Data frame handling I0423 13:36:25.762343 6 log.go:172] (0xc00114b540) (1) Data frame sent I0423 13:36:25.762367 6 log.go:172] (0xc000db0f20) (0xc00114b540) Stream removed, broadcasting: 1 I0423 13:36:25.762385 6 log.go:172] (0xc000db0f20) Go away received I0423 13:36:25.762477 6 log.go:172] (0xc000db0f20) (0xc00114b540) Stream removed, broadcasting: 1 I0423 13:36:25.762508 6 log.go:172] (0xc000db0f20) (0xc001b35860) Stream removed, broadcasting: 3 I0423 13:36:25.762529 6 log.go:172] (0xc000db0f20) (0xc00114bae0) Stream removed, broadcasting: 5 Apr 23 13:36:25.762: INFO: Exec stderr: "" Apr 23 13:36:25.762: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:36:25.762: INFO: >>> kubeConfig: /root/.kube/config I0423 13:36:25.793328 6 log.go:172] (0xc0011908f0) (0xc0000fee60) Create stream I0423 13:36:25.793373 6 log.go:172] (0xc0011908f0) (0xc0000fee60) Stream added, 
broadcasting: 1 I0423 13:36:25.795687 6 log.go:172] (0xc0011908f0) Reply frame received for 1 I0423 13:36:25.795736 6 log.go:172] (0xc0011908f0) (0xc000984be0) Create stream I0423 13:36:25.795766 6 log.go:172] (0xc0011908f0) (0xc000984be0) Stream added, broadcasting: 3 I0423 13:36:25.797238 6 log.go:172] (0xc0011908f0) Reply frame received for 3 I0423 13:36:25.797298 6 log.go:172] (0xc0011908f0) (0xc000984d20) Create stream I0423 13:36:25.797313 6 log.go:172] (0xc0011908f0) (0xc000984d20) Stream added, broadcasting: 5 I0423 13:36:25.798468 6 log.go:172] (0xc0011908f0) Reply frame received for 5 I0423 13:36:25.853698 6 log.go:172] (0xc0011908f0) Data frame received for 3 I0423 13:36:25.853740 6 log.go:172] (0xc000984be0) (3) Data frame handling I0423 13:36:25.853753 6 log.go:172] (0xc000984be0) (3) Data frame sent I0423 13:36:25.853763 6 log.go:172] (0xc0011908f0) Data frame received for 3 I0423 13:36:25.853769 6 log.go:172] (0xc000984be0) (3) Data frame handling I0423 13:36:25.853791 6 log.go:172] (0xc0011908f0) Data frame received for 5 I0423 13:36:25.853808 6 log.go:172] (0xc000984d20) (5) Data frame handling I0423 13:36:25.855350 6 log.go:172] (0xc0011908f0) Data frame received for 1 I0423 13:36:25.855379 6 log.go:172] (0xc0000fee60) (1) Data frame handling I0423 13:36:25.855406 6 log.go:172] (0xc0000fee60) (1) Data frame sent I0423 13:36:25.855583 6 log.go:172] (0xc0011908f0) (0xc0000fee60) Stream removed, broadcasting: 1 I0423 13:36:25.855623 6 log.go:172] (0xc0011908f0) Go away received I0423 13:36:25.855709 6 log.go:172] (0xc0011908f0) (0xc0000fee60) Stream removed, broadcasting: 1 I0423 13:36:25.855753 6 log.go:172] (0xc0011908f0) (0xc000984be0) Stream removed, broadcasting: 3 I0423 13:36:25.855764 6 log.go:172] (0xc0011908f0) (0xc000984d20) Stream removed, broadcasting: 5 Apr 23 13:36:25.855: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 23 13:36:25.855: INFO: 
ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:36:25.855: INFO: >>> kubeConfig: /root/.kube/config I0423 13:36:25.888521 6 log.go:172] (0xc000c7da20) (0xc001b35ea0) Create stream I0423 13:36:25.888544 6 log.go:172] (0xc000c7da20) (0xc001b35ea0) Stream added, broadcasting: 1 I0423 13:36:25.891319 6 log.go:172] (0xc000c7da20) Reply frame received for 1 I0423 13:36:25.891371 6 log.go:172] (0xc000c7da20) (0xc000f180a0) Create stream I0423 13:36:25.891383 6 log.go:172] (0xc000c7da20) (0xc000f180a0) Stream added, broadcasting: 3 I0423 13:36:25.892548 6 log.go:172] (0xc000c7da20) Reply frame received for 3 I0423 13:36:25.892595 6 log.go:172] (0xc000c7da20) (0xc002eaa1e0) Create stream I0423 13:36:25.892609 6 log.go:172] (0xc000c7da20) (0xc002eaa1e0) Stream added, broadcasting: 5 I0423 13:36:25.893868 6 log.go:172] (0xc000c7da20) Reply frame received for 5 I0423 13:36:25.961577 6 log.go:172] (0xc000c7da20) Data frame received for 5 I0423 13:36:25.961628 6 log.go:172] (0xc000c7da20) Data frame received for 3 I0423 13:36:25.961665 6 log.go:172] (0xc000f180a0) (3) Data frame handling I0423 13:36:25.961681 6 log.go:172] (0xc000f180a0) (3) Data frame sent I0423 13:36:25.961694 6 log.go:172] (0xc000c7da20) Data frame received for 3 I0423 13:36:25.961711 6 log.go:172] (0xc000f180a0) (3) Data frame handling I0423 13:36:25.961752 6 log.go:172] (0xc002eaa1e0) (5) Data frame handling I0423 13:36:25.963073 6 log.go:172] (0xc000c7da20) Data frame received for 1 I0423 13:36:25.963090 6 log.go:172] (0xc001b35ea0) (1) Data frame handling I0423 13:36:25.963112 6 log.go:172] (0xc001b35ea0) (1) Data frame sent I0423 13:36:25.963238 6 log.go:172] (0xc000c7da20) (0xc001b35ea0) Stream removed, broadcasting: 1 I0423 13:36:25.963321 6 log.go:172] (0xc000c7da20) Go away received I0423 13:36:25.963381 6 log.go:172] (0xc000c7da20) (0xc001b35ea0) 
Stream removed, broadcasting: 1 I0423 13:36:25.963429 6 log.go:172] (0xc000c7da20) (0xc000f180a0) Stream removed, broadcasting: 3 I0423 13:36:25.963450 6 log.go:172] (0xc000c7da20) (0xc002eaa1e0) Stream removed, broadcasting: 5 Apr 23 13:36:25.963: INFO: Exec stderr: "" Apr 23 13:36:25.963: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:36:25.963: INFO: >>> kubeConfig: /root/.kube/config I0423 13:36:25.999167 6 log.go:172] (0xc001b66420) (0xc0005aa280) Create stream I0423 13:36:25.999199 6 log.go:172] (0xc001b66420) (0xc0005aa280) Stream added, broadcasting: 1 I0423 13:36:26.001676 6 log.go:172] (0xc001b66420) Reply frame received for 1 I0423 13:36:26.001721 6 log.go:172] (0xc001b66420) (0xc0005aa320) Create stream I0423 13:36:26.001736 6 log.go:172] (0xc001b66420) (0xc0005aa320) Stream added, broadcasting: 3 I0423 13:36:26.002687 6 log.go:172] (0xc001b66420) Reply frame received for 3 I0423 13:36:26.002722 6 log.go:172] (0xc001b66420) (0xc002eaa280) Create stream I0423 13:36:26.002735 6 log.go:172] (0xc001b66420) (0xc002eaa280) Stream added, broadcasting: 5 I0423 13:36:26.003668 6 log.go:172] (0xc001b66420) Reply frame received for 5 I0423 13:36:26.051811 6 log.go:172] (0xc001b66420) Data frame received for 5 I0423 13:36:26.051912 6 log.go:172] (0xc002eaa280) (5) Data frame handling I0423 13:36:26.051974 6 log.go:172] (0xc001b66420) Data frame received for 3 I0423 13:36:26.052037 6 log.go:172] (0xc0005aa320) (3) Data frame handling I0423 13:36:26.052080 6 log.go:172] (0xc0005aa320) (3) Data frame sent I0423 13:36:26.052103 6 log.go:172] (0xc001b66420) Data frame received for 3 I0423 13:36:26.052117 6 log.go:172] (0xc0005aa320) (3) Data frame handling I0423 13:36:26.054212 6 log.go:172] (0xc001b66420) Data frame received for 1 I0423 13:36:26.054246 6 log.go:172] (0xc0005aa280) (1) Data frame 
handling I0423 13:36:26.054271 6 log.go:172] (0xc0005aa280) (1) Data frame sent I0423 13:36:26.054295 6 log.go:172] (0xc001b66420) (0xc0005aa280) Stream removed, broadcasting: 1 I0423 13:36:26.054412 6 log.go:172] (0xc001b66420) (0xc0005aa280) Stream removed, broadcasting: 1 I0423 13:36:26.054438 6 log.go:172] (0xc001b66420) (0xc0005aa320) Stream removed, broadcasting: 3 I0423 13:36:26.054779 6 log.go:172] (0xc001b66420) (0xc002eaa280) Stream removed, broadcasting: 5 Apr 23 13:36:26.054: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 23 13:36:26.054: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:36:26.054: INFO: >>> kubeConfig: /root/.kube/config I0423 13:36:26.057425 6 log.go:172] (0xc001b66420) Go away received I0423 13:36:26.090723 6 log.go:172] (0xc00061fce0) (0xc002eaa5a0) Create stream I0423 13:36:26.090761 6 log.go:172] (0xc00061fce0) (0xc002eaa5a0) Stream added, broadcasting: 1 I0423 13:36:26.093777 6 log.go:172] (0xc00061fce0) Reply frame received for 1 I0423 13:36:26.093825 6 log.go:172] (0xc00061fce0) (0xc0005aa3c0) Create stream I0423 13:36:26.093840 6 log.go:172] (0xc00061fce0) (0xc0005aa3c0) Stream added, broadcasting: 3 I0423 13:36:26.094912 6 log.go:172] (0xc00061fce0) Reply frame received for 3 I0423 13:36:26.094949 6 log.go:172] (0xc00061fce0) (0xc0005aa460) Create stream I0423 13:36:26.094966 6 log.go:172] (0xc00061fce0) (0xc0005aa460) Stream added, broadcasting: 5 I0423 13:36:26.095898 6 log.go:172] (0xc00061fce0) Reply frame received for 5 I0423 13:36:26.171598 6 log.go:172] (0xc00061fce0) Data frame received for 5 I0423 13:36:26.171661 6 log.go:172] (0xc0005aa460) (5) Data frame handling I0423 13:36:26.171704 6 log.go:172] (0xc00061fce0) Data frame received for 3 I0423 13:36:26.171736 6 
log.go:172] (0xc0005aa3c0) (3) Data frame handling
I0423 13:36:26.171768 6 log.go:172] (0xc0005aa3c0) (3) Data frame sent
I0423 13:36:26.171785 6 log.go:172] (0xc00061fce0) Data frame received for 3
I0423 13:36:26.171801 6 log.go:172] (0xc0005aa3c0) (3) Data frame handling
I0423 13:36:26.173980 6 log.go:172] (0xc00061fce0) Data frame received for 1
I0423 13:36:26.174004 6 log.go:172] (0xc002eaa5a0) (1) Data frame handling
I0423 13:36:26.174031 6 log.go:172] (0xc002eaa5a0) (1) Data frame sent
I0423 13:36:26.174056 6 log.go:172] (0xc00061fce0) (0xc002eaa5a0) Stream removed, broadcasting: 1
I0423 13:36:26.174146 6 log.go:172] (0xc00061fce0) Go away received
I0423 13:36:26.174195 6 log.go:172] (0xc00061fce0) (0xc002eaa5a0) Stream removed, broadcasting: 1
I0423 13:36:26.174238 6 log.go:172] (0xc00061fce0) (0xc0005aa3c0) Stream removed, broadcasting: 3
I0423 13:36:26.174249 6 log.go:172] (0xc00061fce0) (0xc0005aa460) Stream removed, broadcasting: 5
Apr 23 13:36:26.174: INFO: Exec stderr: ""
Apr 23 13:36:26.174: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 23 13:36:26.174: INFO: >>> kubeConfig: /root/.kube/config
I0423 13:36:26.209097 6 log.go:172] (0xc001b67600) (0xc0005aaaa0) Create stream
I0423 13:36:26.209261 6 log.go:172] (0xc001b67600) (0xc0005aaaa0) Stream added, broadcasting: 1
I0423 13:36:26.212545 6 log.go:172] (0xc001b67600) Reply frame received for 1
I0423 13:36:26.212577 6 log.go:172] (0xc001b67600) (0xc002eaa640) Create stream
I0423 13:36:26.212584 6 log.go:172] (0xc001b67600) (0xc002eaa640) Stream added, broadcasting: 3
I0423 13:36:26.213847 6 log.go:172] (0xc001b67600) Reply frame received for 3
I0423 13:36:26.213866 6 log.go:172] (0xc001b67600) (0xc000984fa0) Create stream
I0423 13:36:26.213877 6 log.go:172] (0xc001b67600) (0xc000984fa0) Stream added, broadcasting: 5
I0423 13:36:26.214976 6 log.go:172] (0xc001b67600) Reply frame received for 5
I0423 13:36:26.266828 6 log.go:172] (0xc001b67600) Data frame received for 3
I0423 13:36:26.266854 6 log.go:172] (0xc002eaa640) (3) Data frame handling
I0423 13:36:26.266872 6 log.go:172] (0xc002eaa640) (3) Data frame sent
I0423 13:36:26.266880 6 log.go:172] (0xc001b67600) Data frame received for 3
I0423 13:36:26.266892 6 log.go:172] (0xc002eaa640) (3) Data frame handling
I0423 13:36:26.267077 6 log.go:172] (0xc001b67600) Data frame received for 5
I0423 13:36:26.267131 6 log.go:172] (0xc000984fa0) (5) Data frame handling
I0423 13:36:26.268690 6 log.go:172] (0xc001b67600) Data frame received for 1
I0423 13:36:26.268726 6 log.go:172] (0xc0005aaaa0) (1) Data frame handling
I0423 13:36:26.268758 6 log.go:172] (0xc0005aaaa0) (1) Data frame sent
I0423 13:36:26.268834 6 log.go:172] (0xc001b67600) (0xc0005aaaa0) Stream removed, broadcasting: 1
I0423 13:36:26.269016 6 log.go:172] (0xc001b67600) (0xc0005aaaa0) Stream removed, broadcasting: 1
I0423 13:36:26.269083 6 log.go:172] (0xc001b67600) (0xc002eaa640) Stream removed, broadcasting: 3
I0423 13:36:26.269427 6 log.go:172] (0xc001b67600) Go away received
I0423 13:36:26.269497 6 log.go:172] (0xc001b67600) (0xc000984fa0) Stream removed, broadcasting: 5
Apr 23 13:36:26.269: INFO: Exec stderr: ""
Apr 23 13:36:26.269: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 23 13:36:26.269: INFO: >>> kubeConfig: /root/.kube/config
I0423 13:36:26.299864 6 log.go:172] (0xc001ebb080) (0xc002eaaa00) Create stream
I0423 13:36:26.299901 6 log.go:172] (0xc001ebb080) (0xc002eaaa00) Stream added, broadcasting: 1
I0423 13:36:26.302791 6 log.go:172] (0xc001ebb080) Reply frame received for 1
I0423 13:36:26.302825 6 log.go:172] (0xc001ebb080) (0xc0005aab40) Create stream
I0423 13:36:26.302836 6 log.go:172] (0xc001ebb080) (0xc0005aab40) Stream added, broadcasting: 3
I0423 13:36:26.303936 6 log.go:172] (0xc001ebb080) Reply frame received for 3
I0423 13:36:26.303981 6 log.go:172] (0xc001ebb080) (0xc0005aac80) Create stream
I0423 13:36:26.304000 6 log.go:172] (0xc001ebb080) (0xc0005aac80) Stream added, broadcasting: 5
I0423 13:36:26.305276 6 log.go:172] (0xc001ebb080) Reply frame received for 5
I0423 13:36:26.379035 6 log.go:172] (0xc001ebb080) Data frame received for 5
I0423 13:36:26.379088 6 log.go:172] (0xc001ebb080) Data frame received for 3
I0423 13:36:26.379156 6 log.go:172] (0xc0005aab40) (3) Data frame handling
I0423 13:36:26.379186 6 log.go:172] (0xc0005aab40) (3) Data frame sent
I0423 13:36:26.379209 6 log.go:172] (0xc001ebb080) Data frame received for 3
I0423 13:36:26.379228 6 log.go:172] (0xc0005aac80) (5) Data frame handling
I0423 13:36:26.379256 6 log.go:172] (0xc0005aab40) (3) Data frame handling
I0423 13:36:26.380871 6 log.go:172] (0xc001ebb080) Data frame received for 1
I0423 13:36:26.380897 6 log.go:172] (0xc002eaaa00) (1) Data frame handling
I0423 13:36:26.380919 6 log.go:172] (0xc002eaaa00) (1) Data frame sent
I0423 13:36:26.380944 6 log.go:172] (0xc001ebb080) (0xc002eaaa00) Stream removed, broadcasting: 1
I0423 13:36:26.381069 6 log.go:172] (0xc001ebb080) Go away received
I0423 13:36:26.381272 6 log.go:172] (0xc001ebb080) (0xc002eaaa00) Stream removed, broadcasting: 1
I0423 13:36:26.381323 6 log.go:172] (0xc001ebb080) (0xc0005aab40) Stream removed, broadcasting: 3
I0423 13:36:26.381337 6 log.go:172] (0xc001ebb080) (0xc0005aac80) Stream removed, broadcasting: 5
Apr 23 13:36:26.381: INFO: Exec stderr: ""
Apr 23 13:36:26.381: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-4702 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 23 13:36:26.381: INFO: >>> kubeConfig: /root/.kube/config
I0423 13:36:26.416168 6 log.go:172] (0xc0011911e0) (0xc0000ff040) Create stream
I0423 13:36:26.416197 6 log.go:172] (0xc0011911e0) (0xc0000ff040) Stream added, broadcasting: 1
I0423 13:36:26.419243 6 log.go:172] (0xc0011911e0) Reply frame received for 1
I0423 13:36:26.419278 6 log.go:172] (0xc0011911e0) (0xc002eaab40) Create stream
I0423 13:36:26.419288 6 log.go:172] (0xc0011911e0) (0xc002eaab40) Stream added, broadcasting: 3
I0423 13:36:26.420353 6 log.go:172] (0xc0011911e0) Reply frame received for 3
I0423 13:36:26.420408 6 log.go:172] (0xc0011911e0) (0xc000f18640) Create stream
I0423 13:36:26.420427 6 log.go:172] (0xc0011911e0) (0xc000f18640) Stream added, broadcasting: 5
I0423 13:36:26.421724 6 log.go:172] (0xc0011911e0) Reply frame received for 5
I0423 13:36:26.476426 6 log.go:172] (0xc0011911e0) Data frame received for 3
I0423 13:36:26.476466 6 log.go:172] (0xc002eaab40) (3) Data frame handling
I0423 13:36:26.476481 6 log.go:172] (0xc002eaab40) (3) Data frame sent
I0423 13:36:26.476492 6 log.go:172] (0xc0011911e0) Data frame received for 3
I0423 13:36:26.476504 6 log.go:172] (0xc002eaab40) (3) Data frame handling
I0423 13:36:26.476529 6 log.go:172] (0xc0011911e0) Data frame received for 5
I0423 13:36:26.476552 6 log.go:172] (0xc000f18640) (5) Data frame handling
I0423 13:36:26.478194 6 log.go:172] (0xc0011911e0) Data frame received for 1
I0423 13:36:26.478224 6 log.go:172] (0xc0000ff040) (1) Data frame handling
I0423 13:36:26.478236 6 log.go:172] (0xc0000ff040) (1) Data frame sent
I0423 13:36:26.478256 6 log.go:172] (0xc0011911e0) (0xc0000ff040) Stream removed, broadcasting: 1
I0423 13:36:26.478270 6 log.go:172] (0xc0011911e0) Go away received
I0423 13:36:26.478371 6 log.go:172] (0xc0011911e0) (0xc0000ff040) Stream removed, broadcasting: 1
I0423 13:36:26.478397 6 log.go:172] (0xc0011911e0) (0xc002eaab40) Stream removed, broadcasting: 3
I0423 13:36:26.478410 6 log.go:172] (0xc0011911e0) (0xc000f18640) Stream removed, broadcasting: 5
Apr 23 13:36:26.478: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:36:26.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-4702" for this suite.
Apr 23 13:37:16.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:37:16.577: INFO: namespace e2e-kubelet-etc-hosts-4702 deletion completed in 50.094210037s

• [SLOW TEST:61.276 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:37:16.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:37:16.676: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Apr 23 13:37:21.681: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Apr 23 13:37:21.681: INFO: Creating deployment
test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 23 13:37:26.076: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-4124,SelfLink:/apis/apps/v1/namespaces/deployment-4124/deployments/test-cleanup-deployment,UID:25916c6c-953e-4d1b-a9ca-2bd875aa3238,ResourceVersion:7004444,Generation:1,CreationTimestamp:2020-04-23 13:37:21 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 1,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-23 13:37:22 +0000 UTC 2020-04-23 13:37:22 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-23 13:37:25 +0000 UTC 2020-04-23 13:37:22 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-cleanup-deployment-55bbcbc84c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 23 13:37:26.080: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-4124,SelfLink:/apis/apps/v1/namespaces/deployment-4124/replicasets/test-cleanup-deployment-55bbcbc84c,UID:33ab29f7-7071-45b8-84cc-3685b63082fb,ResourceVersion:7004433,Generation:1,CreationTimestamp:2020-04-23 13:37:22 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 25916c6c-953e-4d1b-a9ca-2bd875aa3238 0xc00271f487 0xc00271f488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 23 13:37:26.084: INFO: Pod "test-cleanup-deployment-55bbcbc84c-k5lt7" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-k5lt7,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-4124,SelfLink:/api/v1/namespaces/deployment-4124/pods/test-cleanup-deployment-55bbcbc84c-k5lt7,UID:302d0e0a-db71-47bb-b309-44d084f3d8f5,ResourceVersion:7004432,Generation:0,CreationTimestamp:2020-04-23 13:37:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c 33ab29f7-7071-45b8-84cc-3685b63082fb 0xc002dfbb67 0xc002dfbb68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cfsq8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cfsq8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-cfsq8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002dfbbe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002dfbc00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:37:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:37:24 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:37:24 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:37:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.144,StartTime:2020-04-23 13:37:22 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-23 13:37:24 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://a0a5b2dec3482712691cc1af7ee341716ce5a68020a510e86fd35325e2fb0dc0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:37:26.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4124" for this suite.
Apr 23 13:37:32.186: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:37:32.260: INFO: namespace deployment-4124 deletion completed in 6.126982642s

• [SLOW TEST:15.683 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:37:32.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:37:32.319: INFO: Creating deployment "test-recreate-deployment"
Apr 23 13:37:32.326: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Apr 23 13:37:32.356: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Apr 23 13:37:34.364: INFO: Waiting deployment "test-recreate-deployment" to complete
Apr 23 13:37:34.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723245852, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723245852, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723245852, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723245852, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Apr 23 13:37:36.372: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Apr 23 13:37:36.378: INFO: Updating deployment test-recreate-deployment
Apr 23 13:37:36.378: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 23 13:37:36.638: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7772,SelfLink:/apis/apps/v1/namespaces/deployment-7772/deployments/test-recreate-deployment,UID:edf384e3-f2f2-491c-af69-f796a3123bb9,ResourceVersion:7004534,Generation:2,CreationTimestamp:2020-04-23 13:37:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-23 13:37:36 +0000 UTC 2020-04-23 13:37:36 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-23 13:37:36 +0000 UTC 2020-04-23 13:37:32 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 23 13:37:36.644: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7772,SelfLink:/apis/apps/v1/namespaces/deployment-7772/replicasets/test-recreate-deployment-5c8c9cc69d,UID:c3c1c5e0-19eb-46e2-85e7-71d023e44845,ResourceVersion:7004532,Generation:1,CreationTimestamp:2020-04-23 13:37:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment edf384e3-f2f2-491c-af69-f796a3123bb9 0xc002a1dc37 0xc002a1dc38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 23 13:37:36.644: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 23 13:37:36.644: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7772,SelfLink:/apis/apps/v1/namespaces/deployment-7772/replicasets/test-recreate-deployment-6df85df6b9,UID:41a2368d-44e3-4c8e-96a8-76a25e3a7824,ResourceVersion:7004523,Generation:2,CreationTimestamp:2020-04-23 13:37:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment edf384e3-f2f2-491c-af69-f796a3123bb9 0xc002a1dd07 0xc002a1dd08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 23 13:37:36.666: INFO: Pod "test-recreate-deployment-5c8c9cc69d-kh5vr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-kh5vr,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7772,SelfLink:/api/v1/namespaces/deployment-7772/pods/test-recreate-deployment-5c8c9cc69d-kh5vr,UID:a84e15cc-e6c8-494b-910b-050c33d6ef80,ResourceVersion:7004536,Generation:0,CreationTimestamp:2020-04-23 13:37:36 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d c3c1c5e0-19eb-46e2-85e7-71d023e44845 0xc00210c647 0xc00210c648}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnhlc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnhlc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnhlc true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00210c6c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00210c6e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:37:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:37:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:37:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:37:36 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-23 13:37:36 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:37:36.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7772" for this suite. 
Apr 23 13:37:42.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:37:42.902: INFO: namespace deployment-7772 deletion completed in 6.232443961s • [SLOW TEST:10.641 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:37:42.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-1789c36b-6def-48db-bd3a-927eb55aef5b STEP: Creating a pod to test consume configMaps Apr 23 13:37:43.003: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bee577c9-4b01-4898-800a-3a1a98f1644b" in namespace "projected-4574" to be "success or failure" Apr 23 13:37:43.006: INFO: Pod "pod-projected-configmaps-bee577c9-4b01-4898-800a-3a1a98f1644b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.932688ms
Apr 23 13:37:45.010: INFO: Pod "pod-projected-configmaps-bee577c9-4b01-4898-800a-3a1a98f1644b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007090216s
Apr 23 13:37:47.014: INFO: Pod "pod-projected-configmaps-bee577c9-4b01-4898-800a-3a1a98f1644b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011367629s
STEP: Saw pod success
Apr 23 13:37:47.015: INFO: Pod "pod-projected-configmaps-bee577c9-4b01-4898-800a-3a1a98f1644b" satisfied condition "success or failure"
Apr 23 13:37:47.023: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-bee577c9-4b01-4898-800a-3a1a98f1644b container projected-configmap-volume-test:
STEP: delete the pod
Apr 23 13:37:47.043: INFO: Waiting for pod pod-projected-configmaps-bee577c9-4b01-4898-800a-3a1a98f1644b to disappear
Apr 23 13:37:47.085: INFO: Pod pod-projected-configmaps-bee577c9-4b01-4898-800a-3a1a98f1644b no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:37:47.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4574" for this suite.
Apr 23 13:37:53.106: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:37:53.188: INFO: namespace projected-4574 deletion completed in 6.100015095s
• [SLOW TEST:10.285 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:37:53.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 23 13:37:53.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8647'
Apr 23 13:37:55.542: INFO: stderr: "kubectl run
--generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 23 13:37:55.542: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Apr 23 13:37:55.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8647'
Apr 23 13:37:55.678: INFO: stderr: ""
Apr 23 13:37:55.678: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:37:55.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8647" for this suite.
Apr 23 13:38:01.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:38:01.803: INFO: namespace kubectl-8647 deletion completed in 6.122442498s
• [SLOW TEST:8.615 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:38:01.804: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Apr 23 13:38:01.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 23 13:38:01.947: INFO: stderr: ""
Apr 23 13:38:01.947: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:38:01.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4073" for this suite.
Apr 23 13:38:07.965: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:38:08.046: INFO: namespace kubectl-4073 deletion completed in 6.095087612s
• [SLOW TEST:6.243 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:38:08.046: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Geting the pod
STEP: Reading file content from the nginx-container
Apr 23 13:38:12.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-6b5b649e-36f6-45dd-9301-ddad7eda6f60 -c busybox-main-container --namespace=emptydir-9806 -- cat /usr/share/volumeshare/shareddata.txt'
Apr 23 13:38:12.348: INFO: stderr: "I0423 13:38:12.260232 1417 log.go:172] (0xc0008f24d0) (0xc000768b40) Create stream\nI0423 13:38:12.260294 1417 log.go:172] (0xc0008f24d0) (0xc000768b40) Stream
added, broadcasting: 1\nI0423 13:38:12.263386 1417 log.go:172] (0xc0008f24d0) Reply frame received for 1\nI0423 13:38:12.263446 1417 log.go:172] (0xc0008f24d0) (0xc000768000) Create stream\nI0423 13:38:12.263462 1417 log.go:172] (0xc0008f24d0) (0xc000768000) Stream added, broadcasting: 3\nI0423 13:38:12.264399 1417 log.go:172] (0xc0008f24d0) Reply frame received for 3\nI0423 13:38:12.264442 1417 log.go:172] (0xc0008f24d0) (0xc0003c00a0) Create stream\nI0423 13:38:12.264451 1417 log.go:172] (0xc0008f24d0) (0xc0003c00a0) Stream added, broadcasting: 5\nI0423 13:38:12.265519 1417 log.go:172] (0xc0008f24d0) Reply frame received for 5\nI0423 13:38:12.341878 1417 log.go:172] (0xc0008f24d0) Data frame received for 5\nI0423 13:38:12.341928 1417 log.go:172] (0xc0003c00a0) (5) Data frame handling\nI0423 13:38:12.341955 1417 log.go:172] (0xc0008f24d0) Data frame received for 3\nI0423 13:38:12.341970 1417 log.go:172] (0xc000768000) (3) Data frame handling\nI0423 13:38:12.341982 1417 log.go:172] (0xc000768000) (3) Data frame sent\nI0423 13:38:12.341992 1417 log.go:172] (0xc0008f24d0) Data frame received for 3\nI0423 13:38:12.342000 1417 log.go:172] (0xc000768000) (3) Data frame handling\nI0423 13:38:12.343770 1417 log.go:172] (0xc0008f24d0) Data frame received for 1\nI0423 13:38:12.343790 1417 log.go:172] (0xc000768b40) (1) Data frame handling\nI0423 13:38:12.343804 1417 log.go:172] (0xc000768b40) (1) Data frame sent\nI0423 13:38:12.343940 1417 log.go:172] (0xc0008f24d0) (0xc000768b40) Stream removed, broadcasting: 1\nI0423 13:38:12.343983 1417 log.go:172] (0xc0008f24d0) Go away received\nI0423 13:38:12.344291 1417 log.go:172] (0xc0008f24d0) (0xc000768b40) Stream removed, broadcasting: 1\nI0423 13:38:12.344311 1417 log.go:172] (0xc0008f24d0) (0xc000768000) Stream removed, broadcasting: 3\nI0423 13:38:12.344320 1417 log.go:172] (0xc0008f24d0) (0xc0003c00a0) Stream removed, broadcasting: 5\n"
Apr 23 13:38:12.348: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:38:12.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9806" for this suite.
Apr 23 13:38:18.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:38:18.443: INFO: namespace emptydir-9806 deletion completed in 6.090652452s
• [SLOW TEST:10.397 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:38:18.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 23 13:38:18.501: INFO: Waiting up to 5m0s for pod "pod-21a3b486-f13d-48d0-bd56-5d91df3ffb9d" in namespace "emptydir-9879" to be "success or failure"
Apr 23 13:38:18.518: INFO: Pod "pod-21a3b486-f13d-48d0-bd56-5d91df3ffb9d": Phase="Pending", Reason="", readiness=false.
Elapsed: 16.594823ms
Apr 23 13:38:20.522: INFO: Pod "pod-21a3b486-f13d-48d0-bd56-5d91df3ffb9d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02104411s
Apr 23 13:38:22.527: INFO: Pod "pod-21a3b486-f13d-48d0-bd56-5d91df3ffb9d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025373423s
STEP: Saw pod success
Apr 23 13:38:22.527: INFO: Pod "pod-21a3b486-f13d-48d0-bd56-5d91df3ffb9d" satisfied condition "success or failure"
Apr 23 13:38:22.530: INFO: Trying to get logs from node iruya-worker2 pod pod-21a3b486-f13d-48d0-bd56-5d91df3ffb9d container test-container:
STEP: delete the pod
Apr 23 13:38:22.578: INFO: Waiting for pod pod-21a3b486-f13d-48d0-bd56-5d91df3ffb9d to disappear
Apr 23 13:38:22.588: INFO: Pod pod-21a3b486-f13d-48d0-bd56-5d91df3ffb9d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:38:22.588: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9879" for this suite.
Apr 23 13:38:28.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:38:28.695: INFO: namespace emptydir-9879 deletion completed in 6.088315844s
• [SLOW TEST:10.251 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:38:28.695: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0423 13:38:38.816665 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 23 13:38:38.816: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:38:38.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2380" for this suite.
Apr 23 13:38:44.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:38:44.898: INFO: namespace gc-2380 deletion completed in 6.077441131s
• [SLOW TEST:16.202 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:38:44.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-2dedca83-d66a-45e7-a012-c4a42fdff648
STEP: Creating a pod to test consume secrets
Apr 23 13:38:44.958: INFO: Waiting up to 5m0s for pod "pod-secrets-b91b8d39-7b69-4c4b-8e6b-6ea0ed46dc75" in namespace "secrets-4883" to be "success or failure"
Apr 23 13:38:44.962: INFO: Pod "pod-secrets-b91b8d39-7b69-4c4b-8e6b-6ea0ed46dc75": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089054ms
Apr 23 13:38:46.966: INFO: Pod "pod-secrets-b91b8d39-7b69-4c4b-8e6b-6ea0ed46dc75": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.008045003s
Apr 23 13:38:48.970: INFO: Pod "pod-secrets-b91b8d39-7b69-4c4b-8e6b-6ea0ed46dc75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012165923s
STEP: Saw pod success
Apr 23 13:38:48.970: INFO: Pod "pod-secrets-b91b8d39-7b69-4c4b-8e6b-6ea0ed46dc75" satisfied condition "success or failure"
Apr 23 13:38:48.972: INFO: Trying to get logs from node iruya-worker pod pod-secrets-b91b8d39-7b69-4c4b-8e6b-6ea0ed46dc75 container secret-volume-test:
STEP: delete the pod
Apr 23 13:38:49.024: INFO: Waiting for pod pod-secrets-b91b8d39-7b69-4c4b-8e6b-6ea0ed46dc75 to disappear
Apr 23 13:38:49.043: INFO: Pod pod-secrets-b91b8d39-7b69-4c4b-8e6b-6ea0ed46dc75 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:38:49.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4883" for this suite.
Apr 23 13:38:55.063: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:38:55.139: INFO: namespace secrets-4883 deletion completed in 6.092429908s
• [SLOW TEST:10.241 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23
13:38:55.140: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 23 13:38:55.214: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-3367'
Apr 23 13:38:55.323: INFO: stderr: ""
Apr 23 13:38:55.323: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
Apr 23 13:38:55.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-3367'
Apr 23 13:38:59.176: INFO: stderr: ""
Apr 23 13:38:59.176: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:38:59.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3367" for this suite.
Apr 23 13:39:05.199: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:39:05.274: INFO: namespace kubectl-3367 deletion completed in 6.08571699s
• [SLOW TEST:10.134 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:39:05.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:39:05.306: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:39:09.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1912" for this suite.
Apr 23 13:39:55.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:39:55.619: INFO: namespace pods-1912 deletion completed in 46.117463754s
• [SLOW TEST:50.345 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:39:55.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-79dcd885-1806-479b-8e56-4e47ab4db726 in namespace container-probe-3128
Apr 23 13:39:59.696: INFO: Started pod liveness-79dcd885-1806-479b-8e56-4e47ab4db726 in namespace container-probe-3128
STEP: checking the pod's current state and verifying that restartCount is present
Apr 23 13:39:59.700: INFO: Initial restart count of pod liveness-79dcd885-1806-479b-8e56-4e47ab4db726 is 0
Apr 23 13:40:13.823: INFO: Restart
count of pod container-probe-3128/liveness-79dcd885-1806-479b-8e56-4e47ab4db726 is now 1 (14.12359239s elapsed)
Apr 23 13:40:33.878: INFO: Restart count of pod container-probe-3128/liveness-79dcd885-1806-479b-8e56-4e47ab4db726 is now 2 (34.17793472s elapsed)
Apr 23 13:40:53.948: INFO: Restart count of pod container-probe-3128/liveness-79dcd885-1806-479b-8e56-4e47ab4db726 is now 3 (54.247801735s elapsed)
Apr 23 13:41:13.986: INFO: Restart count of pod container-probe-3128/liveness-79dcd885-1806-479b-8e56-4e47ab4db726 is now 4 (1m14.285699206s elapsed)
Apr 23 13:42:14.169: INFO: Restart count of pod container-probe-3128/liveness-79dcd885-1806-479b-8e56-4e47ab4db726 is now 5 (2m14.468719118s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:42:14.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3128" for this suite.
Apr 23 13:42:20.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:42:20.337: INFO: namespace container-probe-3128 deletion completed in 6.10993733s
• [SLOW TEST:144.717 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:42:20.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 23 13:42:20.403: INFO: Waiting up to 5m0s for pod "pod-332de7f3-f5fb-4ae3-aef4-71a80afb7969" in namespace "emptydir-2504" to be "success or failure"
Apr 23 13:42:20.423: INFO: Pod "pod-332de7f3-f5fb-4ae3-aef4-71a80afb7969": Phase="Pending", Reason="", readiness=false. Elapsed: 20.118785ms
Apr 23 13:42:22.428: INFO: Pod "pod-332de7f3-f5fb-4ae3-aef4-71a80afb7969": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024473796s
Apr 23 13:42:24.478: INFO: Pod "pod-332de7f3-f5fb-4ae3-aef4-71a80afb7969": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.074986421s
STEP: Saw pod success
Apr 23 13:42:24.479: INFO: Pod "pod-332de7f3-f5fb-4ae3-aef4-71a80afb7969" satisfied condition "success or failure"
Apr 23 13:42:24.482: INFO: Trying to get logs from node iruya-worker pod pod-332de7f3-f5fb-4ae3-aef4-71a80afb7969 container test-container:
STEP: delete the pod
Apr 23 13:42:24.498: INFO: Waiting for pod pod-332de7f3-f5fb-4ae3-aef4-71a80afb7969 to disappear
Apr 23 13:42:24.502: INFO: Pod pod-332de7f3-f5fb-4ae3-aef4-71a80afb7969 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:42:24.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2504" for this suite.
Apr 23 13:42:30.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:42:30.605: INFO: namespace emptydir-2504 deletion completed in 6.09982419s
• [SLOW TEST:10.268 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:42:30.605: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's
limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 23 13:42:30.702: INFO: Waiting up to 5m0s for pod "downward-api-fc754f34-7c5f-4c4a-83f8-bd8508a4ec51" in namespace "downward-api-8977" to be "success or failure"
Apr 23 13:42:30.718: INFO: Pod "downward-api-fc754f34-7c5f-4c4a-83f8-bd8508a4ec51": Phase="Pending", Reason="", readiness=false. Elapsed: 15.796655ms
Apr 23 13:42:32.731: INFO: Pod "downward-api-fc754f34-7c5f-4c4a-83f8-bd8508a4ec51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028374628s
Apr 23 13:42:34.735: INFO: Pod "downward-api-fc754f34-7c5f-4c4a-83f8-bd8508a4ec51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0322865s
STEP: Saw pod success
Apr 23 13:42:34.735: INFO: Pod "downward-api-fc754f34-7c5f-4c4a-83f8-bd8508a4ec51" satisfied condition "success or failure"
Apr 23 13:42:34.737: INFO: Trying to get logs from node iruya-worker2 pod downward-api-fc754f34-7c5f-4c4a-83f8-bd8508a4ec51 container dapi-container:
STEP: delete the pod
Apr 23 13:42:34.757: INFO: Waiting for pod downward-api-fc754f34-7c5f-4c4a-83f8-bd8508a4ec51 to disappear
Apr 23 13:42:34.760: INFO: Pod downward-api-fc754f34-7c5f-4c4a-83f8-bd8508a4ec51 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:42:34.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8977" for this suite.
Apr 23 13:42:40.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:42:40.889: INFO: namespace downward-api-8977 deletion completed in 6.125680314s • [SLOW TEST:10.284 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:42:40.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0423 13:42:53.306537 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 23 13:42:53.306: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:42:53.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6216" for this suite. 
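Editor's note: the garbage-collector test above gives half the pods of `simpletest-rc-to-be-deleted` a second ownerReference pointing at `simpletest-rc-to-stay`, then deletes the first RC. A toy model (not the real controller) of the rule it verifies — a dependent is collected only when all of its owners are gone:

```python
def collect(objects, deleted_owners):
    """Return the objects that survive garbage collection: anything with
    no owners, or with at least one owner that still exists."""
    return [
        obj
        for obj in objects
        if not obj["owners"]
        or any(owner not in deleted_owners for owner in obj["owners"])
    ]

# Hypothetical pods mirroring the test's setup: two owned only by the
# to-be-deleted RC, two owned by both RCs.
pods = (
    [{"name": f"pod-{i}", "owners": ["rc-to-be-deleted"]} for i in range(2)]
    + [{"name": f"pod-{i + 2}", "owners": ["rc-to-be-deleted", "rc-to-stay"]} for i in range(2)]
)
survivors = collect(pods, deleted_owners={"rc-to-be-deleted"})
# Only the dually-owned pods survive the deletion of rc-to-be-deleted.
```

The real GC additionally handles foreground deletion (an owner "waiting for dependents"), which is what the test title refers to; this sketch covers only the ownership rule.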
Apr 23 13:43:01.322: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:43:01.480: INFO: namespace gc-6216 deletion completed in 8.169608994s • [SLOW TEST:20.590 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:43:01.480: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 13:43:01.522: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/ pods/ (200; 4.534827ms)
Apr 23 13:43:01.525: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.803502ms)
Apr 23 13:43:01.527: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.387362ms)
Apr 23 13:43:01.530: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.536173ms)
Apr 23 13:43:01.533: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.0103ms)
Apr 23 13:43:01.535: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.425984ms)
Apr 23 13:43:01.557: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 21.376307ms)
Apr 23 13:43:01.560: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.37143ms)
Apr 23 13:43:01.564: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.484532ms)
Apr 23 13:43:01.567: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.478037ms)
Apr 23 13:43:01.570: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.158738ms)
Apr 23 13:43:01.573: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.672226ms)
Apr 23 13:43:01.576: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.336542ms)
Apr 23 13:43:01.578: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.470924ms)
Apr 23 13:43:01.580: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.397758ms)
Apr 23 13:43:01.584: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.009337ms)
Apr 23 13:43:01.586: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.609953ms)
Apr 23 13:43:01.589: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.688192ms)
Apr 23 13:43:01.592: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.669617ms)
Apr 23 13:43:01.594: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/
(200; 2.726199ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:43:01.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8354" for this suite. Apr 23 13:43:07.635: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:43:07.722: INFO: namespace proxy-8354 deletion completed in 6.124304093s • [SLOW TEST:6.241 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:43:07.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
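Editor's note: the Container Lifecycle Hook test below creates a pod whose container declares a `postStart` exec hook, then confirms the hook ran against the handler pod. A minimal sketch of that manifest as a Python dict — image and command are illustrative; the field layout (`lifecycle.postStart.exec.command`) follows the Kubernetes Pod API:

```python
# Hypothetical manifest mirroring the log's pod-with-poststart-exec-hook.
# The hook command runs inside the container immediately after it starts;
# the container is not considered started until the hook completes.
pod_with_poststart_exec_hook = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-poststart-exec-hook"},
    "spec": {
        "containers": [
            {
                "name": "pod-with-poststart-exec-hook",
                "image": "busybox",
                "command": ["sh", "-c", "sleep 600"],
                "lifecycle": {
                    "postStart": {
                        "exec": {"command": ["sh", "-c", "echo poststart ran"]}
                    }
                },
            }
        ],
    },
}
```

The long pod-deletion poll that follows in the log is the framework waiting for the pod (and its hook container) to be torn down.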
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 23 13:43:15.874: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:15.882: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:17.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:17.886: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:19.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:19.886: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:21.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:21.886: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:23.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:23.887: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:25.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:25.886: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:27.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:27.886: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:29.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:29.886: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:31.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:31.886: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:33.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:33.900: INFO: Pod pod-with-poststart-exec-hook still exists Apr 23 13:43:35.882: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 23 13:43:35.886: INFO: Pod 
pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:43:35.886: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8691" for this suite. Apr 23 13:43:57.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:43:58.007: INFO: namespace container-lifecycle-hook-8691 deletion completed in 22.115868641s • [SLOW TEST:50.285 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:43:58.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name 
secret-test-map-ed393914-554b-4c31-aa31-68cad1a1f022 STEP: Creating a pod to test consume secrets Apr 23 13:43:58.126: INFO: Waiting up to 5m0s for pod "pod-secrets-2708b9c7-d3a6-4ece-bc84-eb1bf1d9baa9" in namespace "secrets-2245" to be "success or failure" Apr 23 13:43:58.128: INFO: Pod "pod-secrets-2708b9c7-d3a6-4ece-bc84-eb1bf1d9baa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.415874ms Apr 23 13:44:00.142: INFO: Pod "pod-secrets-2708b9c7-d3a6-4ece-bc84-eb1bf1d9baa9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016106014s Apr 23 13:44:02.146: INFO: Pod "pod-secrets-2708b9c7-d3a6-4ece-bc84-eb1bf1d9baa9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019928508s STEP: Saw pod success Apr 23 13:44:02.146: INFO: Pod "pod-secrets-2708b9c7-d3a6-4ece-bc84-eb1bf1d9baa9" satisfied condition "success or failure" Apr 23 13:44:02.148: INFO: Trying to get logs from node iruya-worker pod pod-secrets-2708b9c7-d3a6-4ece-bc84-eb1bf1d9baa9 container secret-volume-test: STEP: delete the pod Apr 23 13:44:02.201: INFO: Waiting for pod pod-secrets-2708b9c7-d3a6-4ece-bc84-eb1bf1d9baa9 to disappear Apr 23 13:44:02.206: INFO: Pod pod-secrets-2708b9c7-d3a6-4ece-bc84-eb1bf1d9baa9 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:44:02.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2245" for this suite. 
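Editor's note: "mappings and Item Mode set" in the Secrets test above means the secret's keys are projected to custom paths with an explicit per-item file mode. A sketch of the volume source involved, with illustrative names; note that Kubernetes serializes modes as decimal integers, which is why pod dumps elsewhere in this log show `DefaultMode:*420` (420 decimal is 0o644):

```python
# Kubernetes serializes file modes as decimal integers.
assert 420 == 0o644

# Hypothetical secret volume source: project key "data-1" to a custom
# path with mode 0400, overriding the 0644 default for that item.
secret_volume = {
    "name": "secret-volume",
    "secret": {
        "secretName": "secret-test-map",  # illustrative; the real name carries a uid suffix
        "items": [
            {"key": "data-1", "path": "new-path-data-1", "mode": 0o400}
        ],
        "defaultMode": 0o644,
    },
}
```

The test pod then mounts this volume and asserts both the file content at the mapped path and its mode.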
Apr 23 13:44:08.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:44:08.307: INFO: namespace secrets-2245 deletion completed in 6.097414756s • [SLOW TEST:10.299 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:44:08.308: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-c3d3b9a9-72ca-4fdc-b9e5-eeb9350bb526 STEP: Creating a pod to test consume configMaps Apr 23 13:44:08.394: INFO: Waiting up to 5m0s for pod "pod-configmaps-d13d60a8-e4b4-4f08-93b1-d8c02f36bc78" in namespace "configmap-5261" to be "success or failure" Apr 23 13:44:08.397: INFO: Pod "pod-configmaps-d13d60a8-e4b4-4f08-93b1-d8c02f36bc78": Phase="Pending", Reason="", readiness=false. Elapsed: 3.253948ms Apr 23 13:44:10.401: INFO: Pod "pod-configmaps-d13d60a8-e4b4-4f08-93b1-d8c02f36bc78": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007150027s Apr 23 13:44:12.406: INFO: Pod "pod-configmaps-d13d60a8-e4b4-4f08-93b1-d8c02f36bc78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011705285s STEP: Saw pod success Apr 23 13:44:12.406: INFO: Pod "pod-configmaps-d13d60a8-e4b4-4f08-93b1-d8c02f36bc78" satisfied condition "success or failure" Apr 23 13:44:12.408: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-d13d60a8-e4b4-4f08-93b1-d8c02f36bc78 container configmap-volume-test: STEP: delete the pod Apr 23 13:44:12.429: INFO: Waiting for pod pod-configmaps-d13d60a8-e4b4-4f08-93b1-d8c02f36bc78 to disappear Apr 23 13:44:12.433: INFO: Pod pod-configmaps-d13d60a8-e4b4-4f08-93b1-d8c02f36bc78 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:44:12.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5261" for this suite. Apr 23 13:44:18.459: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:44:18.551: INFO: namespace configmap-5261 deletion completed in 6.113891807s • [SLOW TEST:10.244 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client Apr 23 13:44:18.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 23 13:44:18.610: INFO: Waiting up to 5m0s for pod "downwardapi-volume-700877f1-8aea-408d-b9e7-2eb5dfaaa12f" in namespace "downward-api-2110" to be "success or failure" Apr 23 13:44:18.614: INFO: Pod "downwardapi-volume-700877f1-8aea-408d-b9e7-2eb5dfaaa12f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081361ms Apr 23 13:44:20.632: INFO: Pod "downwardapi-volume-700877f1-8aea-408d-b9e7-2eb5dfaaa12f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022482899s Apr 23 13:44:22.636: INFO: Pod "downwardapi-volume-700877f1-8aea-408d-b9e7-2eb5dfaaa12f": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026167412s STEP: Saw pod success Apr 23 13:44:22.636: INFO: Pod "downwardapi-volume-700877f1-8aea-408d-b9e7-2eb5dfaaa12f" satisfied condition "success or failure" Apr 23 13:44:22.639: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-700877f1-8aea-408d-b9e7-2eb5dfaaa12f container client-container: STEP: delete the pod Apr 23 13:44:22.657: INFO: Waiting for pod downwardapi-volume-700877f1-8aea-408d-b9e7-2eb5dfaaa12f to disappear Apr 23 13:44:22.662: INFO: Pod downwardapi-volume-700877f1-8aea-408d-b9e7-2eb5dfaaa12f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:44:22.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2110" for this suite. Apr 23 13:44:28.773: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:44:28.897: INFO: namespace downward-api-2110 deletion completed in 6.232133755s • [SLOW TEST:10.345 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:44:28.898: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-55a04832-a810-43bb-b3ef-86bd38d0b077 STEP: Creating a pod to test consume configMaps Apr 23 13:44:28.970: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d670a4f9-84a6-4d15-a5ed-907e04ca01db" in namespace "projected-9493" to be "success or failure" Apr 23 13:44:28.986: INFO: Pod "pod-projected-configmaps-d670a4f9-84a6-4d15-a5ed-907e04ca01db": Phase="Pending", Reason="", readiness=false. Elapsed: 15.997672ms Apr 23 13:44:30.990: INFO: Pod "pod-projected-configmaps-d670a4f9-84a6-4d15-a5ed-907e04ca01db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020110581s Apr 23 13:44:32.994: INFO: Pod "pod-projected-configmaps-d670a4f9-84a6-4d15-a5ed-907e04ca01db": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024920056s STEP: Saw pod success Apr 23 13:44:32.995: INFO: Pod "pod-projected-configmaps-d670a4f9-84a6-4d15-a5ed-907e04ca01db" satisfied condition "success or failure" Apr 23 13:44:32.998: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-d670a4f9-84a6-4d15-a5ed-907e04ca01db container projected-configmap-volume-test: STEP: delete the pod Apr 23 13:44:33.029: INFO: Waiting for pod pod-projected-configmaps-d670a4f9-84a6-4d15-a5ed-907e04ca01db to disappear Apr 23 13:44:33.039: INFO: Pod pod-projected-configmaps-d670a4f9-84a6-4d15-a5ed-907e04ca01db no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:44:33.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9493" for this suite. Apr 23 13:44:39.055: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:44:39.126: INFO: namespace projected-9493 deletion completed in 6.084046343s • [SLOW TEST:10.228 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:44:39.126: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 23 13:44:43.200: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-62f3f62b-d73e-44f8-84fb-7ab73f1f8215,GenerateName:,Namespace:events-122,SelfLink:/api/v1/namespaces/events-122/pods/send-events-62f3f62b-d73e-44f8-84fb-7ab73f1f8215,UID:aa1d84f3-0af9-4530-84ed-283d2239757e,ResourceVersion:7006056,Generation:0,CreationTimestamp:2020-04-23 13:44:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 174323234,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-9dcvn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-9dcvn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-9dcvn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003156850} {node.kubernetes.io/unreachable Exists NoExecute 0xc003156870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:44:39 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:44:41 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:44:41 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 13:44:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.111,StartTime:2020-04-23 13:44:39 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-04-23 13:44:41 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://6fdeff1c85df462f514605ff04b8a44a7057338d94918c842a850c5f0d7fce53}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 23 13:44:45.205: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 23 13:44:47.211: INFO: Saw kubelet event for our pod. 
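Editor's note: "checking for scheduler event" and "checking for kubelet event" above are done by listing Events filtered to the pod. A sketch of how such a field selector string can be assembled — the `involvedObject.*` and `source` selector keys are standard for Events, but the helper and its values here are illustrative:

```python
def pod_event_selector(namespace, name, uid, source=None):
    """Build a field-selector string matching events for one pod,
    optionally restricted to a component (e.g. the scheduler)."""
    parts = [
        "involvedObject.kind=Pod",
        f"involvedObject.namespace={namespace}",
        f"involvedObject.name={name}",
        f"involvedObject.uid={uid}",
    ]
    if source:
        # e.g. "default-scheduler" for scheduling events,
        # or the node name for kubelet events (assumption: component name as source).
        parts.append(f"source={source}")
    return ",".join(parts)

selector = pod_event_selector(
    "events-122",
    "send-events-62f3f62b-d73e-44f8-84fb-7ab73f1f8215",
    "aa1d84f3-0af9-4530-84ed-283d2239757e",
    source="default-scheduler",
)
```

The test polls with such selectors until it has seen at least one scheduler event and one kubelet event for the pod.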
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:44:47.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-122" for this suite. Apr 23 13:45:25.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:45:25.392: INFO: namespace events-122 deletion completed in 38.139953836s • [SLOW TEST:46.266 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:45:25.393: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
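Editor's note: the per-node checks in the DaemonSet test below skip any node carrying a taint the daemon pods cannot tolerate, which is why the control-plane node with the `node-role.kubernetes.io/master` NoSchedule taint produces "skip checking this node" messages. A toy version of that predicate (simplified: it ignores operators, values, and other taint effects):

```python
def tolerates_all(taints, tolerations):
    """True if every NoSchedule taint on the node is matched by some
    toleration on the pod (simplified key/effect matching)."""
    def matched(taint):
        return any(
            tol.get("key") == taint["key"]
            and tol.get("effect") in (None, taint["effect"])
            for tol in tolerations
        )
    return all(matched(t) for t in taints if t["effect"] == "NoSchedule")

control_plane_taints = [
    {"key": "node-role.kubernetes.io/master", "effect": "NoSchedule"}
]
# The test's DaemonSet carries no master toleration, so the
# control-plane node is skipped when counting available daemon pods.
ds_tolerations = []
```

The real scheduler's taint/toleration matching also handles `operator`, `value`, and NoExecute/PreferNoSchedule effects; this sketch covers only the NoSchedule case visible in the log.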
Apr 23 13:45:25.532: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:25.537: INFO: Number of nodes with available pods: 0
Apr 23 13:45:25.537: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:45:26.543: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:26.546: INFO: Number of nodes with available pods: 0
Apr 23 13:45:26.546: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:45:27.541: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:27.543: INFO: Number of nodes with available pods: 0
Apr 23 13:45:27.543: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:45:28.542: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:28.545: INFO: Number of nodes with available pods: 0
Apr 23 13:45:28.545: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:45:29.542: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:29.545: INFO: Number of nodes with available pods: 1
Apr 23 13:45:29.546: INFO: Node iruya-worker is running more than one daemon pod
Apr 23 13:45:30.542: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:30.546: INFO: Number of nodes with available pods: 2
Apr 23 13:45:30.546: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Apr 23 13:45:30.572: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:30.575: INFO: Number of nodes with available pods: 1
Apr 23 13:45:30.575: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:31.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:31.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:31.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:32.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:32.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:32.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:33.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:33.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:33.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:34.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:34.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:34.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:35.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:35.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:35.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:36.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:36.585: INFO: Number of nodes with available pods: 1
Apr 23 13:45:36.585: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:37.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:37.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:37.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:38.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:38.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:38.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:39.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:39.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:39.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:40.581: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:40.585: INFO: Number of nodes with available pods: 1
Apr 23 13:45:40.585: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:41.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:41.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:41.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:42.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:42.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:42.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:43.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:43.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:43.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:44.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:44.584: INFO: Number of nodes with available pods: 1
Apr 23 13:45:44.584: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 23 13:45:45.580: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 23 13:45:45.584: INFO: Number of nodes with available pods: 2
Apr 23 13:45:45.584: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6391, will wait for the garbage collector to delete the pods
Apr 23 13:45:45.647: INFO: Deleting DaemonSet.extensions daemon-set took: 7.079764ms
Apr 23 13:45:45.948: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.260105ms
Apr 23 13:45:51.952: INFO: Number of nodes with available pods: 0
Apr 23 13:45:51.952: INFO: Number of running nodes: 0, number of available pods: 0
Apr 23 13:45:51.954: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6391/daemonsets","resourceVersion":"7006263"},"items":null}
Apr 23 13:45:51.956: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6391/pods","resourceVersion":"7006263"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:45:51.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6391" for this suite.
Apr 23 13:45:57.996: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:45:58.073: INFO: namespace daemonsets-6391 deletion completed in 6.107149472s
• [SLOW TEST:32.681 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:45:58.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:46:02.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2774" for this suite.
Apr 23 13:46:08.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:46:08.257: INFO: namespace kubelet-test-2774 deletion completed in 6.086577052s
• [SLOW TEST:10.182 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:46:08.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 23 13:46:08.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9194 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 23 13:46:15.833: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0423 13:46:15.763087 1479 log.go:172] (0xc00093c160) (0xc00092c960) Create stream\nI0423 13:46:15.763159 1479 log.go:172] (0xc00093c160) (0xc00092c960) Stream added, broadcasting: 1\nI0423 13:46:15.766790 1479 log.go:172] (0xc00093c160) Reply frame received for 1\nI0423 13:46:15.766828 1479 log.go:172] (0xc00093c160) (0xc0005c4140) Create stream\nI0423 13:46:15.766840 1479 log.go:172] (0xc00093c160) (0xc0005c4140) Stream added, broadcasting: 3\nI0423 13:46:15.767881 1479 log.go:172] (0xc00093c160) Reply frame received for 3\nI0423 13:46:15.767941 1479 log.go:172] (0xc00093c160) (0xc00092c000) Create stream\nI0423 13:46:15.767961 1479 log.go:172] (0xc00093c160) (0xc00092c000) Stream added, broadcasting: 5\nI0423 13:46:15.768898 1479 log.go:172] (0xc00093c160) Reply frame received for 5\nI0423 13:46:15.768936 1479 log.go:172] (0xc00093c160) (0xc00092c0a0) Create stream\nI0423 13:46:15.768954 1479 log.go:172] (0xc00093c160) (0xc00092c0a0) Stream added, broadcasting: 7\nI0423 13:46:15.770093 1479 log.go:172] (0xc00093c160) Reply frame received for 7\nI0423 13:46:15.770224 1479 log.go:172] (0xc0005c4140) (3) Writing data frame\nI0423 13:46:15.770324 1479 log.go:172] (0xc0005c4140) (3) Writing data frame\nI0423 13:46:15.771120 1479 log.go:172] (0xc00093c160) Data frame received for 5\nI0423 13:46:15.771139 1479 log.go:172] (0xc00092c000) (5) Data frame handling\nI0423 13:46:15.771158 1479 log.go:172] (0xc00092c000) (5) Data frame sent\nI0423 13:46:15.771760 1479 log.go:172] (0xc00093c160) Data frame received for 5\nI0423 13:46:15.771780 1479 log.go:172] (0xc00092c000) (5) Data frame handling\nI0423 13:46:15.771792 1479 log.go:172] (0xc00092c000) (5) Data frame sent\nI0423 13:46:15.813871 1479 log.go:172] (0xc00093c160) Data frame received for 7\nI0423 13:46:15.813905 1479 log.go:172] (0xc00092c0a0) (7) Data frame handling\nI0423 13:46:15.813930 1479 log.go:172] (0xc00093c160) Data frame received for 5\nI0423 13:46:15.813937 1479 log.go:172] (0xc00092c000) (5) Data frame handling\nI0423 13:46:15.814424 1479 log.go:172] (0xc00093c160) Data frame received for 1\nI0423 13:46:15.814447 1479 log.go:172] (0xc00092c960) (1) Data frame handling\nI0423 13:46:15.814458 1479 log.go:172] (0xc00092c960) (1) Data frame sent\nI0423 13:46:15.814470 1479 log.go:172] (0xc00093c160) (0xc00092c960) Stream removed, broadcasting: 1\nI0423 13:46:15.814585 1479 log.go:172] (0xc00093c160) (0xc0005c4140) Stream removed, broadcasting: 3\nI0423 13:46:15.814637 1479 log.go:172] (0xc00093c160) (0xc00092c960) Stream removed, broadcasting: 1\nI0423 13:46:15.814663 1479 log.go:172] (0xc00093c160) (0xc0005c4140) Stream removed, broadcasting: 3\nI0423 13:46:15.814672 1479 log.go:172] (0xc00093c160) (0xc00092c000) Stream removed, broadcasting: 5\nI0423 13:46:15.814682 1479 log.go:172] (0xc00093c160) (0xc00092c0a0) Stream removed, broadcasting: 7\nI0423 13:46:15.814736 1479 log.go:172] (0xc00093c160) Go away received\n"
Apr 23 13:46:15.833: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:46:17.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9194" for this suite.
Apr 23 13:46:23.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:46:23.928: INFO: namespace kubectl-9194 deletion completed in 6.084499139s
• [SLOW TEST:15.670 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:46:23.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-508
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-508
STEP: Creating statefulset with conflicting port in namespace statefulset-508
STEP: Waiting until pod test-pod will start running in namespace statefulset-508
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-508
Apr 23 13:46:31.034: INFO: Observed stateful pod in namespace: statefulset-508, name: ss-0, uid: 3feb7ead-4eb4-4b04-b208-de3e4270bc03, status phase: Pending. Waiting for statefulset controller to delete.
Apr 23 13:46:32.147: INFO: Observed stateful pod in namespace: statefulset-508, name: ss-0, uid: 3feb7ead-4eb4-4b04-b208-de3e4270bc03, status phase: Failed. Waiting for statefulset controller to delete.
Apr 23 13:46:32.156: INFO: Observed stateful pod in namespace: statefulset-508, name: ss-0, uid: 3feb7ead-4eb4-4b04-b208-de3e4270bc03, status phase: Failed. Waiting for statefulset controller to delete.
Apr 23 13:46:32.185: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-508
STEP: Removing pod with conflicting port in namespace statefulset-508
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-508 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 23 13:46:43.071: INFO: Deleting all statefulset in ns statefulset-508
Apr 23 13:46:43.075: INFO: Scaling statefulset ss to 0
Apr 23 13:46:53.478: INFO: Waiting for statefulset status.replicas updated to 0
Apr 23 13:46:53.481: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:46:53.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-508" for this suite.
Apr 23 13:47:01.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:47:01.600: INFO: namespace statefulset-508 deletion completed in 8.093944477s
• [SLOW TEST:37.672 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:47:01.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c45827e9-a005-4e58-8f3b-352a96a58c12
STEP: Creating a pod to test consume secrets
Apr 23 13:47:01.674: INFO: Waiting up to 5m0s for pod "pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e" in namespace "secrets-6772" to be "success or failure"
Apr 23 13:47:01.677: INFO: Pod "pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.208198ms
Apr 23 13:47:03.682: INFO: Pod "pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007718317s
Apr 23 13:47:05.686: INFO: Pod "pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e": Phase="Running", Reason="", readiness=true. Elapsed: 4.012009497s
Apr 23 13:47:07.690: INFO: Pod "pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016631534s
STEP: Saw pod success
Apr 23 13:47:07.691: INFO: Pod "pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e" satisfied condition "success or failure"
Apr 23 13:47:07.694: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e container secret-volume-test:
STEP: delete the pod
Apr 23 13:47:07.724: INFO: Waiting for pod pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e to disappear
Apr 23 13:47:07.726: INFO: Pod pod-secrets-4eeeae14-df03-4d2f-a87b-655ed637932e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:47:07.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6772" for this suite.
Apr 23 13:47:13.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:47:13.819: INFO: namespace secrets-6772 deletion completed in 6.090157909s
• [SLOW TEST:12.218 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:47:13.819: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Apr 23 13:47:17.920: INFO: Pod pod-hostip-497db2e8-87c2-4d08-b7f0-6603efefb4ca has hostIP: 172.17.0.6
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:47:17.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7908" for this suite.
Apr 23 13:47:39.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:47:40.031: INFO: namespace pods-7908 deletion completed in 22.106866092s
• [SLOW TEST:26.212 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:47:40.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8630
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 23 13:47:40.082: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 23 13:47:58.209: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.166:8080/dial?request=hostName&protocol=udp&host=10.244.1.165&port=8081&tries=1'] Namespace:pod-network-test-8630 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 23 13:47:58.209: INFO: >>> kubeConfig: /root/.kube/config
I0423 13:47:58.243849 6 log.go:172] (0xc000037810) (0xc001f485a0) Create stream
I0423 13:47:58.243880 6 log.go:172] (0xc000037810) (0xc001f485a0) Stream added, broadcasting: 1
I0423 13:47:58.246841 6 log.go:172] (0xc000037810) Reply frame received for 1
I0423 13:47:58.246903 6 log.go:172] (0xc000037810) (0xc002d9c000) Create stream
I0423 13:47:58.246929 6 log.go:172] (0xc000037810) (0xc002d9c000) Stream added, broadcasting: 3
I0423 13:47:58.248171 6 log.go:172] (0xc000037810) Reply frame received for 3
I0423 13:47:58.248208 6 log.go:172] (0xc000037810) (0xc001f48640) Create stream
I0423 13:47:58.248239 6 log.go:172] (0xc000037810) (0xc001f48640) Stream added, broadcasting: 5
I0423 13:47:58.249375 6 log.go:172] (0xc000037810) Reply frame received for 5
I0423 13:47:58.321392 6 log.go:172] (0xc000037810) Data frame received for 3
I0423 13:47:58.321421 6 log.go:172] (0xc002d9c000) (3) Data frame handling
I0423 13:47:58.321437 6 log.go:172] (0xc002d9c000) (3) Data frame sent
I0423 13:47:58.321713 6 log.go:172] (0xc000037810) Data frame received for 3
I0423 13:47:58.321732 6 log.go:172] (0xc002d9c000) (3) Data frame handling
I0423 13:47:58.321834 6 log.go:172] (0xc000037810) Data frame received for 5
I0423 13:47:58.321852 6 log.go:172] (0xc001f48640) (5) Data frame handling
I0423 13:47:58.323196 6 log.go:172] (0xc000037810) Data frame received for 1
I0423 13:47:58.323220 6 log.go:172] (0xc001f485a0) (1) Data frame handling
I0423 13:47:58.323235 6 log.go:172] (0xc001f485a0) (1) Data frame sent
I0423 13:47:58.323249 6 log.go:172] (0xc000037810) (0xc001f485a0) Stream removed, broadcasting: 1
I0423 13:47:58.323264 6 log.go:172] (0xc000037810) Go away received
I0423 13:47:58.323390 6 log.go:172] (0xc000037810) (0xc001f485a0) Stream removed, broadcasting: 1
I0423 13:47:58.323415 6 log.go:172] (0xc000037810) (0xc002d9c000) Stream removed, broadcasting: 3
I0423 13:47:58.323423 6 log.go:172] (0xc000037810) (0xc001f48640) Stream removed, broadcasting: 5
Apr 23 13:47:58.323: INFO: Waiting for endpoints: map[]
Apr 23 13:47:58.326: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.166:8080/dial?request=hostName&protocol=udp&host=10.244.2.118&port=8081&tries=1'] Namespace:pod-network-test-8630 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 23 13:47:58.326: INFO: >>> kubeConfig: /root/.kube/config
I0423 13:47:58.352043 6 log.go:172] (0xc00061f080) (0xc001496780) Create stream
I0423 13:47:58.352080 6 log.go:172] (0xc00061f080) (0xc001496780) Stream added, broadcasting: 1
I0423 13:47:58.356938 6 log.go:172] (0xc00061f080) Reply frame received for 1
I0423 13:47:58.357014 6 log.go:172] (0xc00061f080) (0xc001c206e0) Create stream
I0423 13:47:58.357037 6 log.go:172] (0xc00061f080) (0xc001c206e0) Stream added, broadcasting: 3
I0423 13:47:58.363080 6 log.go:172] (0xc00061f080) Reply frame received for 3
I0423 13:47:58.363213 6 log.go:172] (0xc00061f080) (0xc002f3c000) Create stream
I0423 13:47:58.363300 6 log.go:172] (0xc00061f080) (0xc002f3c000) Stream added, broadcasting: 5
I0423 13:47:58.365473 6 log.go:172] (0xc00061f080) Reply frame received for 5
I0423 13:47:58.421826 6 log.go:172] (0xc00061f080) Data frame received for 3
I0423 13:47:58.421854 6 log.go:172] (0xc001c206e0) (3) Data frame handling
I0423 13:47:58.421870 6 log.go:172] (0xc001c206e0) (3) Data frame sent
I0423 13:47:58.422116 6 log.go:172] (0xc00061f080) Data frame received for 5
I0423 13:47:58.422138 6 log.go:172] (0xc002f3c000) (5) Data frame handling
I0423 13:47:58.422177 6 log.go:172] (0xc00061f080) Data frame received for 3
I0423 13:47:58.422201 6 log.go:172] (0xc001c206e0) (3) Data frame handling
I0423 13:47:58.423747 6 log.go:172] (0xc00061f080) Data frame received for 1
I0423 13:47:58.423758 6 log.go:172] (0xc001496780) (1) Data frame handling
I0423 13:47:58.423767 6 log.go:172] (0xc001496780) (1) Data frame sent
I0423 13:47:58.423783 6 log.go:172] (0xc00061f080) (0xc001496780) Stream removed, broadcasting: 1
I0423 13:47:58.423798 6 log.go:172] (0xc00061f080) Go away received
I0423 13:47:58.423896 6 log.go:172] (0xc00061f080) (0xc001496780) Stream removed, broadcasting: 1
I0423 13:47:58.423923 6 log.go:172] (0xc00061f080) (0xc001c206e0) Stream removed, broadcasting: 3
I0423 13:47:58.423939 6 log.go:172] (0xc00061f080) (0xc002f3c000) Stream removed, broadcasting: 5
Apr 23 13:47:58.423: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:47:58.424: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8630" for this suite.
Apr 23 13:48:20.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:48:20.533: INFO: namespace pod-network-test-8630 deletion completed in 22.105463209s
• [SLOW TEST:40.502 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:48:20.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:48:46.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9087" for this suite.
Apr 23 13:48:52.820: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:48:52.901: INFO: namespace namespaces-9087 deletion completed in 6.093107692s
STEP: Destroying namespace "nsdeletetest-2242" for this suite.
Apr 23 13:48:52.904: INFO: Namespace nsdeletetest-2242 was already deleted
STEP: Destroying namespace "nsdeletetest-1785" for this suite.
Apr 23 13:48:58.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:48:58.997: INFO: namespace nsdeletetest-1785 deletion completed in 6.093455284s
• [SLOW TEST:38.464 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:48:58.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-67dcc957-4d69-4679-b590-a03fc7c37b86
STEP: Creating a pod to test consume secrets
Apr 23 13:48:59.089: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49d8490e-1aac-4c0e-8780-c06b5c4f1538" in namespace "projected-3877" to be "success or failure"
Apr 23 13:48:59.092: INFO: Pod "pod-projected-secrets-49d8490e-1aac-4c0e-8780-c06b5c4f1538": Phase="Pending", Reason="", readiness=false. Elapsed: 3.192322ms
Apr 23 13:49:01.097: INFO: Pod "pod-projected-secrets-49d8490e-1aac-4c0e-8780-c06b5c4f1538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007660886s
Apr 23 13:49:03.101: INFO: Pod "pod-projected-secrets-49d8490e-1aac-4c0e-8780-c06b5c4f1538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012150739s
STEP: Saw pod success
Apr 23 13:49:03.101: INFO: Pod "pod-projected-secrets-49d8490e-1aac-4c0e-8780-c06b5c4f1538" satisfied condition "success or failure"
Apr 23 13:49:03.105: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-49d8490e-1aac-4c0e-8780-c06b5c4f1538 container projected-secret-volume-test:
STEP: delete the pod
Apr 23 13:49:03.124: INFO: Waiting for pod pod-projected-secrets-49d8490e-1aac-4c0e-8780-c06b5c4f1538 to disappear
Apr 23 13:49:03.152: INFO: Pod pod-projected-secrets-49d8490e-1aac-4c0e-8780-c06b5c4f1538 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:49:03.153: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3877" for this suite.
Apr 23 13:49:09.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:49:09.236: INFO: namespace projected-3877 deletion completed in 6.079031371s
• [SLOW TEST:10.237 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:49:09.236: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 13:49:09.314: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c754c4c8-ab42-4ba9-9c70-be5260a52b09" in namespace "projected-8987" to be "success or failure"
Apr 23 13:49:09.319: INFO: Pod "downwardapi-volume-c754c4c8-ab42-4ba9-9c70-be5260a52b09": Phase="Pending", Reason="", readiness=false. Elapsed: 4.163807ms
Apr 23 13:49:11.323: INFO: Pod "downwardapi-volume-c754c4c8-ab42-4ba9-9c70-be5260a52b09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008136878s
Apr 23 13:49:13.327: INFO: Pod "downwardapi-volume-c754c4c8-ab42-4ba9-9c70-be5260a52b09": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012398508s
STEP: Saw pod success
Apr 23 13:49:13.327: INFO: Pod "downwardapi-volume-c754c4c8-ab42-4ba9-9c70-be5260a52b09" satisfied condition "success or failure"
Apr 23 13:49:13.330: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-c754c4c8-ab42-4ba9-9c70-be5260a52b09 container client-container:
STEP: delete the pod
Apr 23 13:49:13.363: INFO: Waiting for pod downwardapi-volume-c754c4c8-ab42-4ba9-9c70-be5260a52b09 to disappear
Apr 23 13:49:13.378: INFO: Pod downwardapi-volume-c754c4c8-ab42-4ba9-9c70-be5260a52b09 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:49:13.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8987" for this suite.
Apr 23 13:49:19.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:49:19.467: INFO: namespace projected-8987 deletion completed in 6.084886739s
• [SLOW TEST:10.231 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide container's cpu request [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:49:19.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 23 13:49:19.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-4249'
Apr 23 13:49:21.765: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 23 13:49:21.765: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Apr 23 13:49:25.785: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4249'
Apr 23 13:49:25.889: INFO: stderr: ""
Apr 23 13:49:25.889: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:49:25.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4249" for this suite.
Apr 23 13:49:47.905: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:49:47.975: INFO: namespace kubectl-4249 deletion completed in 22.082056097s
• [SLOW TEST:28.506 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:49:47.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Apr 23 13:49:48.060: INFO: Waiting up to 5m0s for pod "client-containers-60edf45b-0c9c-48cd-81b4-505dda9f14a5" in namespace "containers-6161" to be "success or failure"
Apr 23 13:49:48.070: INFO: Pod "client-containers-60edf45b-0c9c-48cd-81b4-505dda9f14a5": Phase="Pending", Reason="", readiness=false. Elapsed: 9.868518ms
Apr 23 13:49:50.073: INFO: Pod "client-containers-60edf45b-0c9c-48cd-81b4-505dda9f14a5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013386094s
Apr 23 13:49:52.085: INFO: Pod "client-containers-60edf45b-0c9c-48cd-81b4-505dda9f14a5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025052412s
STEP: Saw pod success
Apr 23 13:49:52.085: INFO: Pod "client-containers-60edf45b-0c9c-48cd-81b4-505dda9f14a5" satisfied condition "success or failure"
Apr 23 13:49:52.088: INFO: Trying to get logs from node iruya-worker2 pod client-containers-60edf45b-0c9c-48cd-81b4-505dda9f14a5 container test-container:
STEP: delete the pod
Apr 23 13:49:52.122: INFO: Waiting for pod client-containers-60edf45b-0c9c-48cd-81b4-505dda9f14a5 to disappear
Apr 23 13:49:52.134: INFO: Pod client-containers-60edf45b-0c9c-48cd-81b4-505dda9f14a5 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:49:52.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-6161" for this suite.
Apr 23 13:49:58.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:49:58.255: INFO: namespace containers-6161 deletion completed in 6.099574171s
• [SLOW TEST:10.280 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:49:58.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-6f119160-94dd-4c9d-89af-a77484105ccc
STEP: Creating secret with name secret-projected-all-test-volume-8b243cd0-9df4-434a-a892-8fd652426605
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 23 13:49:58.357: INFO: Waiting up to 5m0s for pod "projected-volume-dcac2634-27bd-466b-9db0-6e2f82f14c85" in namespace "projected-512" to be "success or failure"
Apr 23 13:49:58.367: INFO: Pod "projected-volume-dcac2634-27bd-466b-9db0-6e2f82f14c85": Phase="Pending", Reason="", readiness=false. Elapsed: 9.913851ms
Apr 23 13:50:00.371: INFO: Pod "projected-volume-dcac2634-27bd-466b-9db0-6e2f82f14c85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013327237s
Apr 23 13:50:02.375: INFO: Pod "projected-volume-dcac2634-27bd-466b-9db0-6e2f82f14c85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017496049s
STEP: Saw pod success
Apr 23 13:50:02.375: INFO: Pod "projected-volume-dcac2634-27bd-466b-9db0-6e2f82f14c85" satisfied condition "success or failure"
Apr 23 13:50:02.378: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-dcac2634-27bd-466b-9db0-6e2f82f14c85 container projected-all-volume-test:
STEP: delete the pod
Apr 23 13:50:02.393: INFO: Waiting for pod projected-volume-dcac2634-27bd-466b-9db0-6e2f82f14c85 to disappear
Apr 23 13:50:02.398: INFO: Pod projected-volume-dcac2634-27bd-466b-9db0-6e2f82f14c85 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:50:02.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-512" for this suite.
Apr 23 13:50:08.413: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:50:08.486: INFO: namespace projected-512 deletion completed in 6.085529134s
• [SLOW TEST:10.231 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:50:08.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 23 13:50:12.570: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:50:12.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5074" for this suite.
Apr 23 13:50:18.666: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:50:18.744: INFO: namespace container-runtime-5074 deletion completed in 6.089444433s
• [SLOW TEST:10.258 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:50:18.745: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 13:50:18.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2b71462-483b-4afa-817e-4e8485a5bdf5" in namespace "downward-api-1620" to be "success or failure"
Apr 23 13:50:18.849: INFO: Pod "downwardapi-volume-a2b71462-483b-4afa-817e-4e8485a5bdf5": Phase="Pending", Reason="", readiness=false. Elapsed: 41.200036ms
Apr 23 13:50:20.852: INFO: Pod "downwardapi-volume-a2b71462-483b-4afa-817e-4e8485a5bdf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044046193s
Apr 23 13:50:22.857: INFO: Pod "downwardapi-volume-a2b71462-483b-4afa-817e-4e8485a5bdf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048855562s
STEP: Saw pod success
Apr 23 13:50:22.857: INFO: Pod "downwardapi-volume-a2b71462-483b-4afa-817e-4e8485a5bdf5" satisfied condition "success or failure"
Apr 23 13:50:22.860: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a2b71462-483b-4afa-817e-4e8485a5bdf5 container client-container:
STEP: delete the pod
Apr 23 13:50:22.893: INFO: Waiting for pod downwardapi-volume-a2b71462-483b-4afa-817e-4e8485a5bdf5 to disappear
Apr 23 13:50:22.909: INFO: Pod downwardapi-volume-a2b71462-483b-4afa-817e-4e8485a5bdf5 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:50:22.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1620" for this suite.
Apr 23 13:50:28.931: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:50:29.018: INFO: namespace downward-api-1620 deletion completed in 6.104572548s
• [SLOW TEST:10.273 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:50:29.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Apr 23 13:50:29.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8916'
Apr 23 13:50:29.354: INFO: stderr: ""
Apr 23 13:50:29.354: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 23 13:50:29.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8916'
Apr 23 13:50:29.454: INFO: stderr: ""
Apr 23 13:50:29.454: INFO: stdout: "update-demo-nautilus-l8ch9 update-demo-nautilus-z89k5 "
Apr 23 13:50:29.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8ch9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:29.543: INFO: stderr: ""
Apr 23 13:50:29.543: INFO: stdout: ""
Apr 23 13:50:29.543: INFO: update-demo-nautilus-l8ch9 is created but not running
Apr 23 13:50:34.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8916'
Apr 23 13:50:34.637: INFO: stderr: ""
Apr 23 13:50:34.637: INFO: stdout: "update-demo-nautilus-l8ch9 update-demo-nautilus-z89k5 "
Apr 23 13:50:34.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8ch9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:34.729: INFO: stderr: ""
Apr 23 13:50:34.729: INFO: stdout: "true"
Apr 23 13:50:34.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l8ch9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:34.829: INFO: stderr: ""
Apr 23 13:50:34.829: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 23 13:50:34.829: INFO: validating pod update-demo-nautilus-l8ch9
Apr 23 13:50:34.833: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 23 13:50:34.833: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 23 13:50:34.833: INFO: update-demo-nautilus-l8ch9 is verified up and running
Apr 23 13:50:34.834: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z89k5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:34.929: INFO: stderr: ""
Apr 23 13:50:34.929: INFO: stdout: "true"
Apr 23 13:50:34.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-z89k5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:35.021: INFO: stderr: ""
Apr 23 13:50:35.021: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 23 13:50:35.021: INFO: validating pod update-demo-nautilus-z89k5
Apr 23 13:50:35.025: INFO: got data: {
  "image": "nautilus.jpg"
}
Apr 23 13:50:35.025: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 23 13:50:35.025: INFO: update-demo-nautilus-z89k5 is verified up and running
STEP: rolling-update to new replication controller
Apr 23 13:50:35.028: INFO: scanned /root for discovery docs:
Apr 23 13:50:35.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8916'
Apr 23 13:50:57.608: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 23 13:50:57.608: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 23 13:50:57.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8916'
Apr 23 13:50:57.710: INFO: stderr: ""
Apr 23 13:50:57.710: INFO: stdout: "update-demo-kitten-5zmpv update-demo-kitten-zk7wq "
Apr 23 13:50:57.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5zmpv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:57.799: INFO: stderr: ""
Apr 23 13:50:57.799: INFO: stdout: "true"
Apr 23 13:50:57.799: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-5zmpv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:57.899: INFO: stderr: ""
Apr 23 13:50:57.899: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 23 13:50:57.899: INFO: validating pod update-demo-kitten-5zmpv
Apr 23 13:50:57.902: INFO: got data: {
  "image": "kitten.jpg"
}
Apr 23 13:50:57.902: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 23 13:50:57.902: INFO: update-demo-kitten-5zmpv is verified up and running
Apr 23 13:50:57.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zk7wq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:57.989: INFO: stderr: ""
Apr 23 13:50:57.989: INFO: stdout: "true"
Apr 23 13:50:57.989: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-zk7wq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8916'
Apr 23 13:50:58.108: INFO: stderr: ""
Apr 23 13:50:58.108: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 23 13:50:58.108: INFO: validating pod update-demo-kitten-zk7wq
Apr 23 13:50:58.112: INFO: got data: {
  "image": "kitten.jpg"
}
Apr 23 13:50:58.112: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 23 13:50:58.112: INFO: update-demo-kitten-zk7wq is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:50:58.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8916" for this suite.
Apr 23 13:51:20.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:51:20.219: INFO: namespace kubectl-8916 deletion completed in 22.103817075s
• [SLOW TEST:51.201 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:51:20.220: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 23 13:51:20.292: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:51:25.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5566" for this suite.
Apr 23 13:51:31.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:51:31.972: INFO: namespace init-container-5566 deletion completed in 6.104461169s
• [SLOW TEST:11.752 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:51:31.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 23 13:51:32.068: INFO: Waiting up to 5m0s for pod "pod-14359d05-8dbd-4666-8afc-9177559fecdf" in namespace "emptydir-7685" to be "success or failure"
Apr 23 13:51:32.086: INFO: Pod "pod-14359d05-8dbd-4666-8afc-9177559fecdf": Phase="Pending", Reason="", readiness=false. Elapsed: 17.344785ms
Apr 23 13:51:34.090: INFO: Pod "pod-14359d05-8dbd-4666-8afc-9177559fecdf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021838673s
Apr 23 13:51:36.095: INFO: Pod "pod-14359d05-8dbd-4666-8afc-9177559fecdf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026159453s
STEP: Saw pod success
Apr 23 13:51:36.095: INFO: Pod "pod-14359d05-8dbd-4666-8afc-9177559fecdf" satisfied condition "success or failure"
Apr 23 13:51:36.098: INFO: Trying to get logs from node iruya-worker pod pod-14359d05-8dbd-4666-8afc-9177559fecdf container test-container:
STEP: delete the pod
Apr 23 13:51:36.196: INFO: Waiting for pod pod-14359d05-8dbd-4666-8afc-9177559fecdf to disappear
Apr 23 13:51:36.199: INFO: Pod pod-14359d05-8dbd-4666-8afc-9177559fecdf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:51:36.199: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7685" for this suite.
Apr 23 13:51:42.214: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:51:42.295: INFO: namespace emptydir-7685 deletion completed in 6.094317879s • [SLOW TEST:10.323 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:51:42.296: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 23 13:51:42.325: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:51:49.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-2083" for this suite. 
Apr 23 13:51:55.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:51:55.506: INFO: namespace init-container-2083 deletion completed in 6.103603152s • [SLOW TEST:13.210 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:51:55.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions Apr 23 13:51:55.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' Apr 23 13:51:55.758: INFO: stderr: "" Apr 23 13:51:55.758: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:51:55.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3491" for this suite. 
Apr 23 13:52:01.777: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:52:01.855: INFO: namespace kubectl-3491 deletion completed in 6.091928169s • [SLOW TEST:6.349 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:52:01.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 13:52:01.973: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. Apr 23 13:52:01.983: INFO: Number of nodes with available pods: 0 Apr 23 13:52:01.983: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
Apr 23 13:52:02.031: INFO: Number of nodes with available pods: 0 Apr 23 13:52:02.031: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:03.036: INFO: Number of nodes with available pods: 0 Apr 23 13:52:03.036: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:04.036: INFO: Number of nodes with available pods: 0 Apr 23 13:52:04.036: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:05.036: INFO: Number of nodes with available pods: 0 Apr 23 13:52:05.036: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:06.036: INFO: Number of nodes with available pods: 1 Apr 23 13:52:06.036: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Apr 23 13:52:06.068: INFO: Number of nodes with available pods: 1 Apr 23 13:52:06.068: INFO: Number of running nodes: 0, number of available pods: 1 Apr 23 13:52:07.072: INFO: Number of nodes with available pods: 0 Apr 23 13:52:07.072: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Apr 23 13:52:07.085: INFO: Number of nodes with available pods: 0 Apr 23 13:52:07.085: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:08.089: INFO: Number of nodes with available pods: 0 Apr 23 13:52:08.089: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:09.090: INFO: Number of nodes with available pods: 0 Apr 23 13:52:09.090: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:10.090: INFO: Number of nodes with available pods: 0 Apr 23 13:52:10.090: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:11.090: INFO: Number of nodes with available pods: 0 Apr 23 13:52:11.090: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:12.090: INFO: Number of nodes with available 
pods: 0 Apr 23 13:52:12.090: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:13.090: INFO: Number of nodes with available pods: 0 Apr 23 13:52:13.090: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:14.089: INFO: Number of nodes with available pods: 0 Apr 23 13:52:14.089: INFO: Node iruya-worker is running more than one daemon pod Apr 23 13:52:15.090: INFO: Number of nodes with available pods: 1 Apr 23 13:52:15.090: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9731, will wait for the garbage collector to delete the pods Apr 23 13:52:15.156: INFO: Deleting DaemonSet.extensions daemon-set took: 6.803198ms Apr 23 13:52:15.456: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.230862ms Apr 23 13:52:18.466: INFO: Number of nodes with available pods: 0 Apr 23 13:52:18.466: INFO: Number of running nodes: 0, number of available pods: 0 Apr 23 13:52:18.468: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9731/daemonsets","resourceVersion":"7007804"},"items":null} Apr 23 13:52:18.470: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9731/pods","resourceVersion":"7007804"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:52:18.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9731" for this suite. 
Apr 23 13:52:24.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:52:24.626: INFO: namespace daemonsets-9731 deletion completed in 6.091312083s • [SLOW TEST:22.771 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:52:24.627: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 23 13:52:24.716: INFO: Waiting up to 5m0s for pod "downward-api-f5d5e394-f63a-492f-9058-47d9173ab445" in namespace "downward-api-3683" to be "success or failure" Apr 23 13:52:24.719: INFO: Pod "downward-api-f5d5e394-f63a-492f-9058-47d9173ab445": Phase="Pending", Reason="", readiness=false. Elapsed: 2.49863ms Apr 23 13:52:26.767: INFO: Pod "downward-api-f5d5e394-f63a-492f-9058-47d9173ab445": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051028431s Apr 23 13:52:28.772: INFO: Pod "downward-api-f5d5e394-f63a-492f-9058-47d9173ab445": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.055287732s STEP: Saw pod success Apr 23 13:52:28.772: INFO: Pod "downward-api-f5d5e394-f63a-492f-9058-47d9173ab445" satisfied condition "success or failure" Apr 23 13:52:28.775: INFO: Trying to get logs from node iruya-worker pod downward-api-f5d5e394-f63a-492f-9058-47d9173ab445 container dapi-container: STEP: delete the pod Apr 23 13:52:28.811: INFO: Waiting for pod downward-api-f5d5e394-f63a-492f-9058-47d9173ab445 to disappear Apr 23 13:52:28.833: INFO: Pod downward-api-f5d5e394-f63a-492f-9058-47d9173ab445 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:52:28.833: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3683" for this suite. Apr 23 13:52:34.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:52:34.914: INFO: namespace downward-api-3683 deletion completed in 6.077703371s • [SLOW TEST:10.287 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:52:34.914: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 23 13:52:39.528: INFO: Successfully updated pod "pod-update-5040f47d-9561-4401-917a-c434679f15e7" STEP: verifying the updated pod is in kubernetes Apr 23 13:52:39.538: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:52:39.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7624" for this suite. Apr 23 13:53:01.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:53:01.625: INFO: namespace pods-7624 deletion completed in 22.084121181s • [SLOW TEST:26.711 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:53:01.626: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 23 13:53:01.728: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:53:12.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6404" for this suite. Apr 23 13:53:18.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:53:18.284: INFO: namespace pods-6404 deletion completed in 6.090405057s • [SLOW TEST:16.659 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:53:18.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting 
for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 23 13:53:18.362: INFO: PodSpec: initContainers in spec.initContainers Apr 23 13:54:05.658: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-9ac18bd1-c923-414b-9813-27fa6fca06c6", GenerateName:"", Namespace:"init-container-1922", SelfLink:"/api/v1/namespaces/init-container-1922/pods/pod-init-9ac18bd1-c923-414b-9813-27fa6fca06c6", UID:"4aa5d42c-ec61-414c-9cdf-2706ce1d891d", ResourceVersion:"7008133", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63723246798, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"362302189"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-xdchk", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0033a2000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), 
FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdchk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, 
VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdchk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-xdchk", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), 
Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002da4088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002ab2060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002da4120)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002da4140)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002da4148), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002da414c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723246798, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723246798, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", 
Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723246798, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723246798, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.129", StartTime:(*v1.Time)(0xc002c30060), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fcc0e0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001fcc150)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://ec3b57b1d56b5a2e9411ec1e531d0ad0b8dd1394d44325bed73d7f6ee671e98a"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c300a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002c30080), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:54:05.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1922" for this suite. Apr 23 13:54:27.719: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:54:27.798: INFO: namespace init-container-1922 deletion completed in 22.126560676s • [SLOW TEST:69.514 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:54:27.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for 
node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1534 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 23 13:54:27.896: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 23 13:54:53.980: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.177 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1534 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:54:53.980: INFO: >>> kubeConfig: /root/.kube/config I0423 13:54:54.017844 6 log.go:172] (0xc000e540b0) (0xc00304caa0) Create stream I0423 13:54:54.017877 6 log.go:172] (0xc000e540b0) (0xc00304caa0) Stream added, broadcasting: 1 I0423 13:54:54.019891 6 log.go:172] (0xc000e540b0) Reply frame received for 1 I0423 13:54:54.019959 6 log.go:172] (0xc000e540b0) (0xc002361860) Create stream I0423 13:54:54.019976 6 log.go:172] (0xc000e540b0) (0xc002361860) Stream added, broadcasting: 3 I0423 13:54:54.020869 6 log.go:172] (0xc000e540b0) Reply frame received for 3 I0423 13:54:54.020908 6 log.go:172] (0xc000e540b0) (0xc00304cb40) Create stream I0423 13:54:54.020936 6 log.go:172] (0xc000e540b0) (0xc00304cb40) Stream added, broadcasting: 5 I0423 13:54:54.022047 6 log.go:172] (0xc000e540b0) Reply frame received for 5 I0423 13:54:55.123823 6 log.go:172] (0xc000e540b0) Data frame received for 3 I0423 13:54:55.123871 6 log.go:172] (0xc002361860) (3) Data frame handling I0423 13:54:55.123900 6 log.go:172] (0xc002361860) (3) Data frame sent I0423 13:54:55.123935 6 log.go:172] (0xc000e540b0) Data frame received for 3 I0423 13:54:55.123948 6 log.go:172] (0xc002361860) (3) Data frame handling I0423 13:54:55.124067 6 log.go:172] (0xc000e540b0) Data frame received for 5 I0423 
13:54:55.124094 6 log.go:172] (0xc00304cb40) (5) Data frame handling I0423 13:54:55.126448 6 log.go:172] (0xc000e540b0) Data frame received for 1 I0423 13:54:55.126480 6 log.go:172] (0xc00304caa0) (1) Data frame handling I0423 13:54:55.126499 6 log.go:172] (0xc00304caa0) (1) Data frame sent I0423 13:54:55.126594 6 log.go:172] (0xc000e540b0) (0xc00304caa0) Stream removed, broadcasting: 1 I0423 13:54:55.126726 6 log.go:172] (0xc000e540b0) (0xc00304caa0) Stream removed, broadcasting: 1 I0423 13:54:55.126757 6 log.go:172] (0xc000e540b0) (0xc002361860) Stream removed, broadcasting: 3 I0423 13:54:55.126820 6 log.go:172] (0xc000e540b0) Go away received I0423 13:54:55.126943 6 log.go:172] (0xc000e540b0) (0xc00304cb40) Stream removed, broadcasting: 5 Apr 23 13:54:55.126: INFO: Found all expected endpoints: [netserver-0] Apr 23 13:54:55.130: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.130 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1534 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 13:54:55.130: INFO: >>> kubeConfig: /root/.kube/config I0423 13:54:55.162965 6 log.go:172] (0xc00198c580) (0xc001238320) Create stream I0423 13:54:55.163006 6 log.go:172] (0xc00198c580) (0xc001238320) Stream added, broadcasting: 1 I0423 13:54:55.165654 6 log.go:172] (0xc00198c580) Reply frame received for 1 I0423 13:54:55.165713 6 log.go:172] (0xc00198c580) (0xc001b3f540) Create stream I0423 13:54:55.165757 6 log.go:172] (0xc00198c580) (0xc001b3f540) Stream added, broadcasting: 3 I0423 13:54:55.172021 6 log.go:172] (0xc00198c580) Reply frame received for 3 I0423 13:54:55.172073 6 log.go:172] (0xc00198c580) (0xc00304cdc0) Create stream I0423 13:54:55.172094 6 log.go:172] (0xc00198c580) (0xc00304cdc0) Stream added, broadcasting: 5 I0423 13:54:55.173690 6 log.go:172] (0xc00198c580) Reply frame received for 5 I0423 13:54:56.240769 6 log.go:172] (0xc00198c580) Data frame 
received for 3 I0423 13:54:56.240816 6 log.go:172] (0xc001b3f540) (3) Data frame handling I0423 13:54:56.240848 6 log.go:172] (0xc001b3f540) (3) Data frame sent I0423 13:54:56.240874 6 log.go:172] (0xc00198c580) Data frame received for 3 I0423 13:54:56.240895 6 log.go:172] (0xc001b3f540) (3) Data frame handling I0423 13:54:56.241040 6 log.go:172] (0xc00198c580) Data frame received for 5 I0423 13:54:56.241076 6 log.go:172] (0xc00304cdc0) (5) Data frame handling I0423 13:54:56.243281 6 log.go:172] (0xc00198c580) Data frame received for 1 I0423 13:54:56.243320 6 log.go:172] (0xc001238320) (1) Data frame handling I0423 13:54:56.243342 6 log.go:172] (0xc001238320) (1) Data frame sent I0423 13:54:56.243369 6 log.go:172] (0xc00198c580) (0xc001238320) Stream removed, broadcasting: 1 I0423 13:54:56.243426 6 log.go:172] (0xc00198c580) Go away received I0423 13:54:56.243493 6 log.go:172] (0xc00198c580) (0xc001238320) Stream removed, broadcasting: 1 I0423 13:54:56.243520 6 log.go:172] (0xc00198c580) (0xc001b3f540) Stream removed, broadcasting: 3 I0423 13:54:56.243538 6 log.go:172] (0xc00198c580) (0xc00304cdc0) Stream removed, broadcasting: 5 Apr 23 13:54:56.243: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:54:56.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1534" for this suite. 
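The `ExecWithOptions` lines above run a one-shot UDP probe from the hostexec pod: `echo hostName | nc -w 1 -u <pod-IP> 8081 | grep -v '^\s*$'`, where the final `grep` drops blank lines so an empty reply counts as "no endpoint". A minimal local sketch of that filtering step (the pod IPs are cluster-specific and not reproduced here; `[[:space:]]` is used as the portable POSIX spelling of `\s`):

```shell
# filter_nonblank: drop blank or whitespace-only lines, as the e2e UDP probe
# does to decide whether the netserver pod echoed anything back.
filter_nonblank() {
  grep -v '^[[:space:]]*$'
}

# Simulated probe output: one reply line surrounded by blank lines.
printf 'netserver-0\n\n   \n' | filter_nonblank
```

Only a non-empty reply survives the filter, which is how the framework arrives at "Found all expected endpoints: [netserver-0]".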
Apr 23 13:55:18.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:55:18.359: INFO: namespace pod-network-test-1534 deletion completed in 22.111316396s • [SLOW TEST:50.560 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:55:18.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 13:55:18.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' Apr 23 13:55:18.551: INFO: stderr: "" Apr 23 13:55:18.551: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-04-05T10:39:42Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", 
Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:55:18.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8048" for this suite. Apr 23 13:55:24.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:55:24.648: INFO: namespace kubectl-8048 deletion completed in 6.091979919s • [SLOW TEST:6.288 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl version /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:55:24.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3144 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 23 13:55:24.748: INFO: Found 0 stateful pods, waiting for 3 Apr 23 13:55:34.753: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 23 13:55:34.753: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 23 13:55:34.753: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true Apr 23 13:55:34.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3144 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 23 13:55:34.990: INFO: stderr: "I0423 13:55:34.882498 1876 log.go:172] (0xc0009989a0) (0xc000960aa0) Create stream\nI0423 13:55:34.882537 1876 log.go:172] (0xc0009989a0) (0xc000960aa0) Stream added, broadcasting: 1\nI0423 13:55:34.885861 1876 log.go:172] (0xc0009989a0) Reply frame received for 1\nI0423 13:55:34.885921 1876 log.go:172] (0xc0009989a0) (0xc000960000) Create stream\nI0423 13:55:34.885945 1876 log.go:172] (0xc0009989a0) (0xc000960000) Stream added, broadcasting: 3\nI0423 13:55:34.887031 1876 log.go:172] (0xc0009989a0) Reply frame received for 3\nI0423 13:55:34.887047 1876 log.go:172] (0xc0009989a0) (0xc0006161e0) Create stream\nI0423 13:55:34.887053 1876 log.go:172] (0xc0009989a0) (0xc0006161e0) Stream added, broadcasting: 5\nI0423 13:55:34.888377 1876 log.go:172] (0xc0009989a0) Reply frame received for 5\nI0423 13:55:34.941033 
1876 log.go:172] (0xc0009989a0) Data frame received for 5\nI0423 13:55:34.941059 1876 log.go:172] (0xc0006161e0) (5) Data frame handling\nI0423 13:55:34.941070 1876 log.go:172] (0xc0006161e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 13:55:34.983564 1876 log.go:172] (0xc0009989a0) Data frame received for 5\nI0423 13:55:34.983616 1876 log.go:172] (0xc0006161e0) (5) Data frame handling\nI0423 13:55:34.983644 1876 log.go:172] (0xc0009989a0) Data frame received for 3\nI0423 13:55:34.983664 1876 log.go:172] (0xc000960000) (3) Data frame handling\nI0423 13:55:34.983694 1876 log.go:172] (0xc000960000) (3) Data frame sent\nI0423 13:55:34.983709 1876 log.go:172] (0xc0009989a0) Data frame received for 3\nI0423 13:55:34.983721 1876 log.go:172] (0xc000960000) (3) Data frame handling\nI0423 13:55:34.985532 1876 log.go:172] (0xc0009989a0) Data frame received for 1\nI0423 13:55:34.985553 1876 log.go:172] (0xc000960aa0) (1) Data frame handling\nI0423 13:55:34.985562 1876 log.go:172] (0xc000960aa0) (1) Data frame sent\nI0423 13:55:34.985572 1876 log.go:172] (0xc0009989a0) (0xc000960aa0) Stream removed, broadcasting: 1\nI0423 13:55:34.985626 1876 log.go:172] (0xc0009989a0) Go away received\nI0423 13:55:34.985847 1876 log.go:172] (0xc0009989a0) (0xc000960aa0) Stream removed, broadcasting: 1\nI0423 13:55:34.985858 1876 log.go:172] (0xc0009989a0) (0xc000960000) Stream removed, broadcasting: 3\nI0423 13:55:34.985864 1876 log.go:172] (0xc0009989a0) (0xc0006161e0) Stream removed, broadcasting: 5\n" Apr 23 13:55:34.990: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 23 13:55:34.990: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 23 13:55:45.052: INFO: Updating stateful set ss2 STEP: Creating a new revision 
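The rolling-update test breaks pod readiness by moving nginx's `index.html` aside with `mv -v … || true`; the `|| true` keeps `kubectl exec` (and any `set -e` shell) from failing when the file has already been moved, making the step safe to repeat. A local sketch of that idiom, using a temp directory in place of the pod filesystem (paths are stand-ins, not the real pod layout):

```shell
#!/bin/sh
set -eu
# Stand-ins for /usr/share/nginx/html and /tmp inside the ss2-1 pod.
root=$(mktemp -d)
mkdir -p "$root/html" "$root/tmp"
echo hello > "$root/html/index.html"

# Break "readiness": move the file aside. `|| true` makes the step
# idempotent under `set -e`; a second run (file already gone) still exits 0.
mv -v "$root/html/index.html" "$root/tmp/" || true
mv -v "$root/html/index.html" "$root/tmp/" || true   # no-op, error suppressed

# Restore "readiness": move it back, as the test does before each rollout step.
mv -v "$root/tmp/index.html" "$root/html/" || true
```

With the readiness probe failing, the StatefulSet controller holds the rollout, which is why the log alternates the two `mv` directions around each update.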
STEP: Updating Pods in reverse ordinal order Apr 23 13:55:55.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3144 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 13:55:55.309: INFO: stderr: "I0423 13:55:55.242281 1894 log.go:172] (0xc00012a6e0) (0xc00032a820) Create stream\nI0423 13:55:55.242369 1894 log.go:172] (0xc00012a6e0) (0xc00032a820) Stream added, broadcasting: 1\nI0423 13:55:55.245401 1894 log.go:172] (0xc00012a6e0) Reply frame received for 1\nI0423 13:55:55.245483 1894 log.go:172] (0xc00012a6e0) (0xc0008b0000) Create stream\nI0423 13:55:55.245511 1894 log.go:172] (0xc00012a6e0) (0xc0008b0000) Stream added, broadcasting: 3\nI0423 13:55:55.246668 1894 log.go:172] (0xc00012a6e0) Reply frame received for 3\nI0423 13:55:55.246707 1894 log.go:172] (0xc00012a6e0) (0xc0008e8000) Create stream\nI0423 13:55:55.246735 1894 log.go:172] (0xc00012a6e0) (0xc0008e8000) Stream added, broadcasting: 5\nI0423 13:55:55.247687 1894 log.go:172] (0xc00012a6e0) Reply frame received for 5\nI0423 13:55:55.299841 1894 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0423 13:55:55.299894 1894 log.go:172] (0xc0008e8000) (5) Data frame handling\nI0423 13:55:55.299909 1894 log.go:172] (0xc0008e8000) (5) Data frame sent\nI0423 13:55:55.299923 1894 log.go:172] (0xc00012a6e0) Data frame received for 5\nI0423 13:55:55.299942 1894 log.go:172] (0xc0008e8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0423 13:55:55.299974 1894 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0423 13:55:55.299998 1894 log.go:172] (0xc0008b0000) (3) Data frame handling\nI0423 13:55:55.300025 1894 log.go:172] (0xc0008b0000) (3) Data frame sent\nI0423 13:55:55.300042 1894 log.go:172] (0xc00012a6e0) Data frame received for 3\nI0423 13:55:55.300052 1894 log.go:172] (0xc0008b0000) (3) Data frame handling\nI0423 13:55:55.302264 1894 log.go:172] (0xc00012a6e0) Data frame received for 
1\nI0423 13:55:55.302306 1894 log.go:172] (0xc00032a820) (1) Data frame handling\nI0423 13:55:55.302326 1894 log.go:172] (0xc00032a820) (1) Data frame sent\nI0423 13:55:55.302360 1894 log.go:172] (0xc00012a6e0) (0xc00032a820) Stream removed, broadcasting: 1\nI0423 13:55:55.302414 1894 log.go:172] (0xc00012a6e0) Go away received\nI0423 13:55:55.303052 1894 log.go:172] (0xc00012a6e0) (0xc00032a820) Stream removed, broadcasting: 1\nI0423 13:55:55.303074 1894 log.go:172] (0xc00012a6e0) (0xc0008b0000) Stream removed, broadcasting: 3\nI0423 13:55:55.303089 1894 log.go:172] (0xc00012a6e0) (0xc0008e8000) Stream removed, broadcasting: 5\n" Apr 23 13:55:55.309: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 23 13:55:55.309: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 23 13:56:15.345: INFO: Waiting for StatefulSet statefulset-3144/ss2 to complete update Apr 23 13:56:15.345: INFO: Waiting for Pod statefulset-3144/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Rolling back to a previous revision Apr 23 13:56:25.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3144 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 23 13:56:25.593: INFO: stderr: "I0423 13:56:25.490196 1914 log.go:172] (0xc00099a370) (0xc000a0e6e0) Create stream\nI0423 13:56:25.490252 1914 log.go:172] (0xc00099a370) (0xc000a0e6e0) Stream added, broadcasting: 1\nI0423 13:56:25.492462 1914 log.go:172] (0xc00099a370) Reply frame received for 1\nI0423 13:56:25.492508 1914 log.go:172] (0xc00099a370) (0xc0006c6280) Create stream\nI0423 13:56:25.492525 1914 log.go:172] (0xc00099a370) (0xc0006c6280) Stream added, broadcasting: 3\nI0423 13:56:25.493781 1914 log.go:172] (0xc00099a370) Reply frame received for 3\nI0423 13:56:25.493809 1914 log.go:172] (0xc00099a370) (0xc000a0e780) Create 
stream\nI0423 13:56:25.493817 1914 log.go:172] (0xc00099a370) (0xc000a0e780) Stream added, broadcasting: 5\nI0423 13:56:25.494786 1914 log.go:172] (0xc00099a370) Reply frame received for 5\nI0423 13:56:25.556489 1914 log.go:172] (0xc00099a370) Data frame received for 5\nI0423 13:56:25.556520 1914 log.go:172] (0xc000a0e780) (5) Data frame handling\nI0423 13:56:25.556536 1914 log.go:172] (0xc000a0e780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 13:56:25.585972 1914 log.go:172] (0xc00099a370) Data frame received for 3\nI0423 13:56:25.586091 1914 log.go:172] (0xc0006c6280) (3) Data frame handling\nI0423 13:56:25.586165 1914 log.go:172] (0xc0006c6280) (3) Data frame sent\nI0423 13:56:25.586190 1914 log.go:172] (0xc00099a370) Data frame received for 3\nI0423 13:56:25.586204 1914 log.go:172] (0xc0006c6280) (3) Data frame handling\nI0423 13:56:25.586222 1914 log.go:172] (0xc00099a370) Data frame received for 5\nI0423 13:56:25.586234 1914 log.go:172] (0xc000a0e780) (5) Data frame handling\nI0423 13:56:25.587830 1914 log.go:172] (0xc00099a370) Data frame received for 1\nI0423 13:56:25.587856 1914 log.go:172] (0xc000a0e6e0) (1) Data frame handling\nI0423 13:56:25.587879 1914 log.go:172] (0xc000a0e6e0) (1) Data frame sent\nI0423 13:56:25.587909 1914 log.go:172] (0xc00099a370) (0xc000a0e6e0) Stream removed, broadcasting: 1\nI0423 13:56:25.587938 1914 log.go:172] (0xc00099a370) Go away received\nI0423 13:56:25.588266 1914 log.go:172] (0xc00099a370) (0xc000a0e6e0) Stream removed, broadcasting: 1\nI0423 13:56:25.588287 1914 log.go:172] (0xc00099a370) (0xc0006c6280) Stream removed, broadcasting: 3\nI0423 13:56:25.588294 1914 log.go:172] (0xc00099a370) (0xc000a0e780) Stream removed, broadcasting: 5\n" Apr 23 13:56:25.593: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 23 13:56:25.593: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 
23 13:56:35.623: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order Apr 23 13:56:45.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3144 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 13:56:45.865: INFO: stderr: "I0423 13:56:45.783158 1935 log.go:172] (0xc000130e70) (0xc00032a820) Create stream\nI0423 13:56:45.783222 1935 log.go:172] (0xc000130e70) (0xc00032a820) Stream added, broadcasting: 1\nI0423 13:56:45.784903 1935 log.go:172] (0xc000130e70) Reply frame received for 1\nI0423 13:56:45.784943 1935 log.go:172] (0xc000130e70) (0xc00032a8c0) Create stream\nI0423 13:56:45.784961 1935 log.go:172] (0xc000130e70) (0xc00032a8c0) Stream added, broadcasting: 3\nI0423 13:56:45.785737 1935 log.go:172] (0xc000130e70) Reply frame received for 3\nI0423 13:56:45.785783 1935 log.go:172] (0xc000130e70) (0xc000906000) Create stream\nI0423 13:56:45.785798 1935 log.go:172] (0xc000130e70) (0xc000906000) Stream added, broadcasting: 5\nI0423 13:56:45.786385 1935 log.go:172] (0xc000130e70) Reply frame received for 5\nI0423 13:56:45.860119 1935 log.go:172] (0xc000130e70) Data frame received for 5\nI0423 13:56:45.860293 1935 log.go:172] (0xc000906000) (5) Data frame handling\nI0423 13:56:45.860316 1935 log.go:172] (0xc000906000) (5) Data frame sent\nI0423 13:56:45.860329 1935 log.go:172] (0xc000130e70) Data frame received for 5\nI0423 13:56:45.860334 1935 log.go:172] (0xc000906000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0423 13:56:45.860569 1935 log.go:172] (0xc000130e70) Data frame received for 3\nI0423 13:56:45.860599 1935 log.go:172] (0xc00032a8c0) (3) Data frame handling\nI0423 13:56:45.860611 1935 log.go:172] (0xc00032a8c0) (3) Data frame sent\nI0423 13:56:45.860623 1935 log.go:172] (0xc000130e70) Data frame received for 3\nI0423 13:56:45.860638 1935 log.go:172] (0xc00032a8c0) (3) Data frame handling\nI0423 13:56:45.861689 
1935 log.go:172] (0xc000130e70) Data frame received for 1\nI0423 13:56:45.861705 1935 log.go:172] (0xc00032a820) (1) Data frame handling\nI0423 13:56:45.861718 1935 log.go:172] (0xc00032a820) (1) Data frame sent\nI0423 13:56:45.861729 1935 log.go:172] (0xc000130e70) (0xc00032a820) Stream removed, broadcasting: 1\nI0423 13:56:45.861746 1935 log.go:172] (0xc000130e70) Go away received\nI0423 13:56:45.862016 1935 log.go:172] (0xc000130e70) (0xc00032a820) Stream removed, broadcasting: 1\nI0423 13:56:45.862030 1935 log.go:172] (0xc000130e70) (0xc00032a8c0) Stream removed, broadcasting: 3\nI0423 13:56:45.862037 1935 log.go:172] (0xc000130e70) (0xc000906000) Stream removed, broadcasting: 5\n" Apr 23 13:56:45.866: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 23 13:56:45.866: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 23 13:57:15.882: INFO: Deleting all statefulset in ns statefulset-3144 Apr 23 13:57:15.885: INFO: Scaling statefulset ss2 to 0 Apr 23 13:57:35.899: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 13:57:35.902: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:57:35.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3144" for this suite. 
Apr 23 13:57:41.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:57:42.145: INFO: namespace statefulset-3144 deletion completed in 6.225502665s • [SLOW TEST:137.497 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:57:42.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-e5dd80d8-2824-4c4f-ac73-851821acf647 STEP: Creating a pod to test consume configMaps Apr 23 13:57:42.227: INFO: Waiting up to 5m0s for pod "pod-configmaps-41949d93-6ee3-4a0d-99cf-413630571ad7" in namespace "configmap-348" to be "success or failure" Apr 23 13:57:42.230: INFO: Pod "pod-configmaps-41949d93-6ee3-4a0d-99cf-413630571ad7": 
Phase="Pending", Reason="", readiness=false. Elapsed: 3.080529ms Apr 23 13:57:44.233: INFO: Pod "pod-configmaps-41949d93-6ee3-4a0d-99cf-413630571ad7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006770412s Apr 23 13:57:46.238: INFO: Pod "pod-configmaps-41949d93-6ee3-4a0d-99cf-413630571ad7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011388815s STEP: Saw pod success Apr 23 13:57:46.238: INFO: Pod "pod-configmaps-41949d93-6ee3-4a0d-99cf-413630571ad7" satisfied condition "success or failure" Apr 23 13:57:46.241: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-41949d93-6ee3-4a0d-99cf-413630571ad7 container configmap-volume-test: STEP: delete the pod Apr 23 13:57:46.262: INFO: Waiting for pod pod-configmaps-41949d93-6ee3-4a0d-99cf-413630571ad7 to disappear Apr 23 13:57:46.266: INFO: Pod pod-configmaps-41949d93-6ee3-4a0d-99cf-413630571ad7 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:57:46.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-348" for this suite. 
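The repeated `Waiting up to 5m0s for pod … / Phase="Pending" … Elapsed: …` lines are a poll-with-timeout loop inside the e2e framework. Its shape can be sketched in shell (polling a file rather than a pod phase; the timeout and interval values here are illustrative, not the framework's):

```shell
#!/bin/sh
# wait_for TIMEOUT CMD...: poll CMD once per second until it succeeds or
# TIMEOUT seconds elapse, mirroring the framework's WaitForPod* helpers.
wait_for() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -lt "$deadline" ] || return 1
    sleep 1
  done
}

marker=$(mktemp -u)
( sleep 2; touch "$marker" ) &    # condition becomes true after ~2 seconds
wait_for 10 test -f "$marker" && echo "condition met"
```

On success the loop returns early (here after roughly two polls); on timeout it returns non-zero, which is what surfaces as a test failure like the CNAME mismatch shown in the DNS test at the top of this run.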
Apr 23 13:57:52.281: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:57:52.359: INFO: namespace configmap-348 deletion completed in 6.09084041s • [SLOW TEST:10.214 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:57:52.360: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod Apr 23 13:57:52.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6849' Apr 23 13:57:52.703: INFO: stderr: "" Apr 23 13:57:52.703: INFO: stdout: "pod/pause created\n" Apr 23 13:57:52.703: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] Apr 23 13:57:52.703: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6849" to be "running and ready" Apr 23 13:57:52.709: INFO: Pod "pause": Phase="Pending", 
Reason="", readiness=false. Elapsed: 5.95075ms Apr 23 13:57:54.713: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010126259s Apr 23 13:57:56.717: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.014111479s Apr 23 13:57:56.717: INFO: Pod "pause" satisfied condition "running and ready" Apr 23 13:57:56.718: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: adding the label testing-label with value testing-label-value to a pod Apr 23 13:57:56.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6849' Apr 23 13:57:56.818: INFO: stderr: "" Apr 23 13:57:56.818: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value Apr 23 13:57:56.818: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6849' Apr 23 13:57:56.915: INFO: stderr: "" Apr 23 13:57:56.915: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod Apr 23 13:57:56.915: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6849' Apr 23 13:57:57.016: INFO: stderr: "" Apr 23 13:57:57.016: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label Apr 23 13:57:57.016: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6849' Apr 23 13:57:57.131: INFO: stderr: "" Apr 23 13:57:57.131: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] 
[k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217 STEP: using delete to clean up resources Apr 23 13:57:57.132: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6849' Apr 23 13:57:57.272: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 13:57:57.272: INFO: stdout: "pod \"pause\" force deleted\n" Apr 23 13:57:57.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6849' Apr 23 13:57:57.382: INFO: stderr: "No resources found.\n" Apr 23 13:57:57.382: INFO: stdout: "" Apr 23 13:57:57.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6849 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 23 13:57:57.478: INFO: stderr: "" Apr 23 13:57:57.478: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:57:57.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6849" for this suite. 
Apr 23 13:58:03.528: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 13:58:03.604: INFO: namespace kubectl-6849 deletion completed in 6.122986812s • [SLOW TEST:11.244 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 13:58:03.604: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 13:58:07.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8435" for this suite. 
Apr 23 13:58:45.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:58:45.803: INFO: namespace kubelet-test-8435 deletion completed in 38.09504004s
• [SLOW TEST:42.198 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:58:45.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 23 13:58:45.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-3591'
Apr 23 13:58:45.959: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 23 13:58:45.959: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Apr 23 13:58:45.999: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-pwhcv]
Apr 23 13:58:45.999: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-pwhcv" in namespace "kubectl-3591" to be "running and ready"
Apr 23 13:58:46.001: INFO: Pod "e2e-test-nginx-rc-pwhcv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.698598ms
Apr 23 13:58:48.005: INFO: Pod "e2e-test-nginx-rc-pwhcv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006709449s
Apr 23 13:58:50.010: INFO: Pod "e2e-test-nginx-rc-pwhcv": Phase="Running", Reason="", readiness=true. Elapsed: 4.010948896s
Apr 23 13:58:50.010: INFO: Pod "e2e-test-nginx-rc-pwhcv" satisfied condition "running and ready"
Apr 23 13:58:50.010: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-pwhcv]
Apr 23 13:58:50.010: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-3591'
Apr 23 13:58:50.159: INFO: stderr: ""
Apr 23 13:58:50.159: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Apr 23 13:58:50.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-3591'
Apr 23 13:58:50.255: INFO: stderr: ""
Apr 23 13:58:50.255: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:58:50.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3591" for this suite.
Apr 23 13:59:12.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:59:12.350: INFO: namespace kubectl-3591 deletion completed in 22.092070839s
• [SLOW TEST:26.547 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:59:12.351: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 13:59:12.433: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5b14206c-fe42-48c9-91c4-e99fb9765f76" in namespace "projected-2975" to be "success or failure"
Apr 23 13:59:12.436: INFO: Pod "downwardapi-volume-5b14206c-fe42-48c9-91c4-e99fb9765f76": Phase="Pending", Reason="", readiness=false. Elapsed: 3.617756ms
Apr 23 13:59:14.462: INFO: Pod "downwardapi-volume-5b14206c-fe42-48c9-91c4-e99fb9765f76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029320811s
Apr 23 13:59:16.466: INFO: Pod "downwardapi-volume-5b14206c-fe42-48c9-91c4-e99fb9765f76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033304445s
STEP: Saw pod success
Apr 23 13:59:16.466: INFO: Pod "downwardapi-volume-5b14206c-fe42-48c9-91c4-e99fb9765f76" satisfied condition "success or failure"
Apr 23 13:59:16.470: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-5b14206c-fe42-48c9-91c4-e99fb9765f76 container client-container:
STEP: delete the pod
Apr 23 13:59:16.486: INFO: Waiting for pod downwardapi-volume-5b14206c-fe42-48c9-91c4-e99fb9765f76 to disappear
Apr 23 13:59:16.490: INFO: Pod downwardapi-volume-5b14206c-fe42-48c9-91c4-e99fb9765f76 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:59:16.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2975" for this suite.
Apr 23 13:59:22.518: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:59:22.599: INFO: namespace projected-2975 deletion completed in 6.10599412s
• [SLOW TEST:10.248 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:59:22.599: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 13:59:22.660: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ad6321df-b97f-4ada-b1df-4800b704f9ea" in namespace "projected-6210" to be "success or failure"
Apr 23 13:59:22.679: INFO: Pod "downwardapi-volume-ad6321df-b97f-4ada-b1df-4800b704f9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 19.026916ms
Apr 23 13:59:24.695: INFO: Pod "downwardapi-volume-ad6321df-b97f-4ada-b1df-4800b704f9ea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035267199s
Apr 23 13:59:26.700: INFO: Pod "downwardapi-volume-ad6321df-b97f-4ada-b1df-4800b704f9ea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.039689755s
STEP: Saw pod success
Apr 23 13:59:26.700: INFO: Pod "downwardapi-volume-ad6321df-b97f-4ada-b1df-4800b704f9ea" satisfied condition "success or failure"
Apr 23 13:59:26.703: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ad6321df-b97f-4ada-b1df-4800b704f9ea container client-container:
STEP: delete the pod
Apr 23 13:59:26.734: INFO: Waiting for pod downwardapi-volume-ad6321df-b97f-4ada-b1df-4800b704f9ea to disappear
Apr 23 13:59:26.741: INFO: Pod downwardapi-volume-ad6321df-b97f-4ada-b1df-4800b704f9ea no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:59:26.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6210" for this suite.
Apr 23 13:59:32.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:59:32.842: INFO: namespace projected-6210 deletion completed in 6.096430268s
• [SLOW TEST:10.243 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:59:32.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 13:59:32.897: INFO: Waiting up to 5m0s for pod "downwardapi-volume-38ae6116-8092-4c37-9123-b3c624ef5a22" in namespace "downward-api-6035" to be "success or failure"
Apr 23 13:59:32.915: INFO: Pod "downwardapi-volume-38ae6116-8092-4c37-9123-b3c624ef5a22": Phase="Pending", Reason="", readiness=false. Elapsed: 18.200976ms
Apr 23 13:59:34.919: INFO: Pod "downwardapi-volume-38ae6116-8092-4c37-9123-b3c624ef5a22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021537432s
Apr 23 13:59:36.923: INFO: Pod "downwardapi-volume-38ae6116-8092-4c37-9123-b3c624ef5a22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026029709s
STEP: Saw pod success
Apr 23 13:59:36.923: INFO: Pod "downwardapi-volume-38ae6116-8092-4c37-9123-b3c624ef5a22" satisfied condition "success or failure"
Apr 23 13:59:36.927: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-38ae6116-8092-4c37-9123-b3c624ef5a22 container client-container:
STEP: delete the pod
Apr 23 13:59:36.964: INFO: Waiting for pod downwardapi-volume-38ae6116-8092-4c37-9123-b3c624ef5a22 to disappear
Apr 23 13:59:36.987: INFO: Pod downwardapi-volume-38ae6116-8092-4c37-9123-b3c624ef5a22 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:59:36.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6035" for this suite.
Apr 23 13:59:43.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:59:43.077: INFO: namespace downward-api-6035 deletion completed in 6.0873673s
• [SLOW TEST:10.236 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:59:43.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-8fbf8419-86c0-4bc6-8f7a-9932b8bd12b2
STEP: Creating a pod to test consume secrets
Apr 23 13:59:43.212: INFO: Waiting up to 5m0s for pod "pod-secrets-dda32aee-1004-47f8-bb4e-1b794d0c542f" in namespace "secrets-2026" to be "success or failure"
Apr 23 13:59:43.215: INFO: Pod "pod-secrets-dda32aee-1004-47f8-bb4e-1b794d0c542f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.887494ms
Apr 23 13:59:45.219: INFO: Pod "pod-secrets-dda32aee-1004-47f8-bb4e-1b794d0c542f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006817837s
Apr 23 13:59:47.224: INFO: Pod "pod-secrets-dda32aee-1004-47f8-bb4e-1b794d0c542f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011581112s
STEP: Saw pod success
Apr 23 13:59:47.224: INFO: Pod "pod-secrets-dda32aee-1004-47f8-bb4e-1b794d0c542f" satisfied condition "success or failure"
Apr 23 13:59:47.227: INFO: Trying to get logs from node iruya-worker pod pod-secrets-dda32aee-1004-47f8-bb4e-1b794d0c542f container secret-env-test:
STEP: delete the pod
Apr 23 13:59:47.296: INFO: Waiting for pod pod-secrets-dda32aee-1004-47f8-bb4e-1b794d0c542f to disappear
Apr 23 13:59:47.299: INFO: Pod pod-secrets-dda32aee-1004-47f8-bb4e-1b794d0c542f no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:59:47.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2026" for this suite.
Apr 23 13:59:53.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 13:59:53.417: INFO: namespace secrets-2026 deletion completed in 6.11386355s
• [SLOW TEST:10.339 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 13:59:53.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 13:59:53.493: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 23 13:59:55.532: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 13:59:56.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-3352" for this suite.
Apr 23 14:00:02.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:00:02.679: INFO: namespace replication-controller-3352 deletion completed in 6.107196442s
• [SLOW TEST:9.262 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:00:02.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-cbjtq in namespace proxy-4622
I0423 14:00:02.827684 6 runners.go:180] Created replication controller with name: proxy-service-cbjtq, namespace: proxy-4622, replica count: 1
I0423 14:00:03.878133 6 runners.go:180] proxy-service-cbjtq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0423 14:00:04.878309 6 runners.go:180] proxy-service-cbjtq Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0423 14:00:05.878516 6 runners.go:180] proxy-service-cbjtq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 14:00:06.878712 6 runners.go:180] proxy-service-cbjtq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 14:00:07.878944 6 runners.go:180] proxy-service-cbjtq Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0423 14:00:08.879157 6 runners.go:180] proxy-service-cbjtq Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Apr 23 14:00:08.882: INFO: setup took 6.108373729s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Apr 23 14:00:08.890: INFO: (0) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 7.11653ms)
Apr 23 14:00:08.891: INFO: (0) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 8.264636ms)
Apr 23 14:00:08.891: INFO: (0) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 8.245046ms)
Apr 23 14:00:08.891: INFO: (0) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 8.33107ms)
Apr 23 14:00:08.891: INFO: (0) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 8.167407ms)
Apr 23 14:00:08.891: INFO: (0) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 8.555632ms)
Apr 23 14:00:08.891: INFO: (0) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 8.911593ms)
Apr 23 14:00:08.892: INFO: (0) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 9.452787ms)
Apr 23 14:00:08.892: INFO: (0) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 9.591042ms)
Apr 23 14:00:08.892: INFO: (0) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 9.584796ms)
Apr 23 14:00:08.897: INFO: (0) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 14.00131ms)
Apr 23 14:00:08.898: INFO: (0) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 15.422016ms)
Apr 23 14:00:08.898: INFO: (0) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 15.450557ms)
Apr 23 14:00:08.898: INFO: (0) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 15.738077ms)
Apr 23 14:00:08.899: INFO: (0) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 16.711396ms)
Apr 23 14:00:08.900: INFO: (0) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test<... (200; 5.071562ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 5.07226ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 5.05905ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.169032ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 5.129033ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 5.092538ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 5.202179ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 5.299008ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.478937ms)
Apr 23 14:00:08.905: INFO: (1) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test<... (200; 5.013366ms)
Apr 23 14:00:08.912: INFO: (2) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.331951ms)
Apr 23 14:00:08.912: INFO: (2) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 5.395046ms)
Apr 23 14:00:08.912: INFO: (2) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 5.267509ms)
Apr 23 14:00:08.913: INFO: (2) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 5.746438ms)
Apr 23 14:00:08.913: INFO: (2) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 5.788997ms)
Apr 23 14:00:08.913: INFO: (2) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test (200; 4.352701ms)
Apr 23 14:00:08.919: INFO: (3) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 4.80934ms)
Apr 23 14:00:08.919: INFO: (3) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 4.80823ms)
Apr 23 14:00:08.919: INFO: (3) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 4.909124ms)
Apr 23 14:00:08.919: INFO: (3) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 5.070001ms)
Apr 23 14:00:08.919: INFO: (3) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.373835ms)
Apr 23 14:00:08.919: INFO: (3) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 5.412765ms)
Apr 23 14:00:08.920: INFO: (3) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 6.062497ms)
Apr 23 14:00:08.920: INFO: (3) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 6.17428ms)
Apr 23 14:00:08.920: INFO: (3) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 6.129585ms)
Apr 23 14:00:08.920: INFO: (3) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 6.05997ms)
Apr 23 14:00:08.920: INFO: (3) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test<... (200; 4.941255ms)
Apr 23 14:00:08.926: INFO: (4) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 4.910577ms)
Apr 23 14:00:08.926: INFO: (4) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.006186ms)
Apr 23 14:00:08.926: INFO: (4) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 5.441058ms)
Apr 23 14:00:08.926: INFO: (4) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 5.444012ms)
Apr 23 14:00:08.926: INFO: (4) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 5.584969ms)
Apr 23 14:00:08.926: INFO: (4) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 5.536329ms)
Apr 23 14:00:08.927: INFO: (4) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 5.792668ms)
Apr 23 14:00:08.927: INFO: (4) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 5.860548ms)
Apr 23 14:00:08.927: INFO: (4) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 5.894106ms)
Apr 23 14:00:08.930: INFO: (5) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 3.192836ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 5.196058ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 5.117321ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test (200; 5.292203ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 5.279321ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 5.455245ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.524249ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 5.401764ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 5.466914ms)
Apr 23 14:00:08.932: INFO: (5) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 5.461773ms)
Apr 23 14:00:08.935: INFO: (6) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 2.215814ms)
Apr 23 14:00:08.935: INFO: (6) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: ... (200; 4.372837ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 4.465275ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 4.467336ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 4.610131ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 4.481761ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 4.583932ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.546074ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 4.561496ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 4.584291ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 4.678831ms)
Apr 23 14:00:08.937: INFO: (6) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 4.648214ms)
Apr 23 14:00:08.938: INFO: (6) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 5.136767ms)
Apr 23 14:00:08.938: INFO: (6) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 5.224297ms)
Apr 23 14:00:08.938: INFO: (6) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 5.330792ms)
Apr 23 14:00:08.941: INFO: (7) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.311274ms)
Apr 23 14:00:08.941: INFO: (7) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.465812ms)
Apr 23 14:00:08.941: INFO: (7) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 3.47286ms)
Apr 23 14:00:08.942: INFO: (7) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 3.800732ms)
Apr 23 14:00:08.942: INFO: (7) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.909949ms)
Apr 23 14:00:08.942: INFO: (7) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 3.998864ms)
Apr 23 14:00:08.942: INFO: (7) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 3.996676ms)
Apr 23 14:00:08.942: INFO: (7) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.099969ms)
Apr 23 14:00:08.942: INFO: (7) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 4.081219ms)
Apr 23 14:00:08.942: INFO: (7) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 4.298874ms)
Apr 23 14:00:08.942: INFO: (7) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 4.231492ms)
Apr 23 14:00:08.943: INFO: (7) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 4.60333ms)
Apr 23 14:00:08.943: INFO: (7) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 4.601901ms)
Apr 23 14:00:08.943: INFO: (7) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test<... (200; 5.140509ms)
Apr 23 14:00:08.948: INFO: (8) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 5.270986ms)
Apr 23 14:00:08.948: INFO: (8) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 5.271356ms)
Apr 23 14:00:08.948: INFO: (8) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 5.726506ms)
Apr 23 14:00:08.949: INFO: (8) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 5.783112ms)
Apr 23 14:00:08.949: INFO: (8) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.786077ms)
Apr 23 14:00:08.949: INFO: (8) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 6.438303ms)
Apr 23 14:00:08.949: INFO: (8) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 6.407562ms)
Apr 23 14:00:08.949: INFO: (8) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 6.574446ms)
Apr 23 14:00:08.949: INFO: (8) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 6.543211ms)
Apr 23 14:00:08.949: INFO: (8) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 6.638147ms)
Apr 23 14:00:08.949: INFO: (8) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 6.825619ms)
Apr 23 14:00:08.950: INFO: (8) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 6.992923ms)
Apr 23 14:00:08.953: INFO: (9) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 3.341237ms)
Apr 23 14:00:08.954: INFO: (9) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 3.704412ms)
Apr 23 14:00:08.954: INFO: (9) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.717908ms)
Apr 23 14:00:08.954: INFO: (9) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 3.842321ms)
Apr 23 14:00:08.954: INFO: (9) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.915241ms)
Apr 23 14:00:08.954: INFO: (9) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.859162ms)
Apr 23 14:00:08.954: INFO: (9) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.103721ms)
Apr 23 14:00:08.954: INFO: (9) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test<... (200; 3.469836ms)
Apr 23 14:00:08.959: INFO: (10) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 3.848942ms)
Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 4.263412ms)
Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 4.547282ms)
Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 4.550746ms)
Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 4.546515ms)
Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.607318ms)
Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 4.649994ms)
Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 4.726629ms)
Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 4.688101ms)
Apr 23
14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: ... (200; 4.708343ms) Apr 23 14:00:08.960: INFO: (10) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 4.666765ms) Apr 23 14:00:08.962: INFO: (10) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 6.31102ms) Apr 23 14:00:08.966: INFO: (11) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.13848ms) Apr 23 14:00:08.966: INFO: (11) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 4.121166ms) Apr 23 14:00:08.966: INFO: (11) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 4.54206ms) Apr 23 14:00:08.967: INFO: (11) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 4.571137ms) Apr 23 14:00:08.967: INFO: (11) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 4.899445ms) Apr 23 14:00:08.967: INFO: (11) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 5.020606ms) Apr 23 14:00:08.967: INFO: (11) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 5.1725ms) Apr 23 14:00:08.967: INFO: (11) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 5.223991ms) Apr 23 14:00:08.968: INFO: (11) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 5.6314ms) Apr 23 14:00:08.968: INFO: (11) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: ... 
(200; 5.631715ms) Apr 23 14:00:08.968: INFO: (11) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 5.59956ms) Apr 23 14:00:08.968: INFO: (11) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.617095ms) Apr 23 14:00:08.968: INFO: (11) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 5.684331ms) Apr 23 14:00:08.968: INFO: (11) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 5.744774ms) Apr 23 14:00:08.968: INFO: (11) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 5.623996ms) Apr 23 14:00:08.971: INFO: (12) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 3.245341ms) Apr 23 14:00:08.971: INFO: (12) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.416653ms) Apr 23 14:00:08.971: INFO: (12) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 3.541243ms) Apr 23 14:00:08.971: INFO: (12) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: ... 
(200; 3.696383ms) Apr 23 14:00:08.971: INFO: (12) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 3.708343ms) Apr 23 14:00:08.971: INFO: (12) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.69516ms) Apr 23 14:00:08.971: INFO: (12) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.673453ms) Apr 23 14:00:08.972: INFO: (12) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 3.967016ms) Apr 23 14:00:08.972: INFO: (12) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 4.015472ms) Apr 23 14:00:08.972: INFO: (12) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.958126ms) Apr 23 14:00:08.972: INFO: (12) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 4.26609ms) Apr 23 14:00:08.972: INFO: (12) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 4.246469ms) Apr 23 14:00:08.972: INFO: (12) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 4.284108ms) Apr 23 14:00:08.972: INFO: (12) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 4.270134ms) Apr 23 14:00:08.972: INFO: (12) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 4.358883ms) Apr 23 14:00:08.974: INFO: (13) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 2.220252ms) Apr 23 14:00:08.975: INFO: (13) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... 
(200; 3.212553ms) Apr 23 14:00:08.976: INFO: (13) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.410781ms) Apr 23 14:00:08.976: INFO: (13) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 4.092898ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.673476ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 4.656793ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 4.750446ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 4.727098ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 4.774011ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 4.887611ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 4.908593ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 5.044361ms) Apr 23 14:00:08.977: INFO: (13) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test<... (200; 3.11385ms) Apr 23 14:00:08.981: INFO: (14) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.129334ms) Apr 23 14:00:08.981: INFO: (14) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test (200; 4.219199ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... 
(200; 4.220299ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.492508ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 4.52039ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 4.453794ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 4.454263ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.495881ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 4.673021ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 4.730431ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 4.764268ms) Apr 23 14:00:08.982: INFO: (14) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 4.733902ms) Apr 23 14:00:08.984: INFO: (15) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 1.962317ms) Apr 23 14:00:08.986: INFO: (15) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.383877ms) Apr 23 14:00:08.987: INFO: (15) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 4.014492ms) Apr 23 14:00:08.987: INFO: (15) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... 
(200; 4.321938ms) Apr 23 14:00:08.987: INFO: (15) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 4.245022ms) Apr 23 14:00:08.987: INFO: (15) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 4.265871ms) Apr 23 14:00:08.987: INFO: (15) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 4.486704ms) Apr 23 14:00:08.987: INFO: (15) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 4.457809ms) Apr 23 14:00:08.987: INFO: (15) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 4.598694ms) Apr 23 14:00:08.987: INFO: (15) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test<... (200; 5.244653ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 5.282634ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 5.26003ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 5.217081ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 5.25769ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... 
(200; 5.244626ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname1/proxy/: foo (200; 5.336812ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/services/http:proxy-service-cbjtq:portname2/proxy/: bar (200; 5.2511ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 5.271883ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 5.33634ms) Apr 23 14:00:08.993: INFO: (16) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 5.289084ms) Apr 23 14:00:08.997: INFO: (17) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.60731ms) Apr 23 14:00:08.997: INFO: (17) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 3.752287ms) Apr 23 14:00:08.997: INFO: (17) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 3.765732ms) Apr 23 14:00:08.997: INFO: (17) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 3.72186ms) Apr 23 14:00:08.997: INFO: (17) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.837362ms) Apr 23 14:00:08.997: INFO: (17) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.752096ms) Apr 23 14:00:08.997: INFO: (17) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... (200; 3.921662ms) Apr 23 14:00:08.998: INFO: (17) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 4.049064ms) Apr 23 14:00:08.998: INFO: (17) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: test<... 
(200; 4.245284ms) Apr 23 14:00:09.003: INFO: (18) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname2/proxy/: bar (200; 4.456605ms) Apr 23 14:00:09.004: INFO: (18) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 4.482975ms) Apr 23 14:00:09.004: INFO: (18) /api/v1/namespaces/proxy-4622/services/proxy-service-cbjtq:portname1/proxy/: foo (200; 4.570238ms) Apr 23 14:00:09.004: INFO: (18) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 4.587828ms) Apr 23 14:00:09.004: INFO: (18) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 4.737353ms) Apr 23 14:00:09.004: INFO: (18) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname2/proxy/: tls qux (200; 4.787935ms) Apr 23 14:00:09.004: INFO: (18) /api/v1/namespaces/proxy-4622/services/https:proxy-service-cbjtq:tlsportname1/proxy/: tls baz (200; 4.793992ms) Apr 23 14:00:09.004: INFO: (18) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 4.81174ms) Apr 23 14:00:09.004: INFO: (18) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:462/proxy/: tls qux (200; 4.855209ms) Apr 23 14:00:09.007: INFO: (19) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z/proxy/: test (200; 3.305873ms) Apr 23 14:00:09.007: INFO: (19) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.378065ms) Apr 23 14:00:09.007: INFO: (19) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:162/proxy/: bar (200; 3.360447ms) Apr 23 14:00:09.007: INFO: (19) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:460/proxy/: tls baz (200; 3.407491ms) Apr 23 14:00:09.007: INFO: (19) /api/v1/namespaces/proxy-4622/pods/http:proxy-service-cbjtq-26r5z:1080/proxy/: ... (200; 3.444768ms) Apr 23 14:00:09.007: INFO: (19) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:1080/proxy/: test<... 
(200; 3.451757ms) Apr 23 14:00:09.007: INFO: (19) /api/v1/namespaces/proxy-4622/pods/proxy-service-cbjtq-26r5z:160/proxy/: foo (200; 3.438642ms) Apr 23 14:00:09.007: INFO: (19) /api/v1/namespaces/proxy-4622/pods/https:proxy-service-cbjtq-26r5z:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 23 14:00:28.152: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b319e6d6-49eb-4f08-89af-83c2583ce5ef" in namespace "projected-6267" to be "success or failure" Apr 23 14:00:28.170: INFO: Pod "downwardapi-volume-b319e6d6-49eb-4f08-89af-83c2583ce5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 18.001258ms Apr 23 14:00:30.174: INFO: Pod "downwardapi-volume-b319e6d6-49eb-4f08-89af-83c2583ce5ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022002394s Apr 23 14:00:32.179: INFO: Pod "downwardapi-volume-b319e6d6-49eb-4f08-89af-83c2583ce5ef": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026490656s STEP: Saw pod success Apr 23 14:00:32.179: INFO: Pod "downwardapi-volume-b319e6d6-49eb-4f08-89af-83c2583ce5ef" satisfied condition "success or failure" Apr 23 14:00:32.181: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-b319e6d6-49eb-4f08-89af-83c2583ce5ef container client-container: STEP: delete the pod Apr 23 14:00:32.213: INFO: Waiting for pod downwardapi-volume-b319e6d6-49eb-4f08-89af-83c2583ce5ef to disappear Apr 23 14:00:32.247: INFO: Pod downwardapi-volume-b319e6d6-49eb-4f08-89af-83c2583ce5ef no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:00:32.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6267" for this suite. Apr 23 14:00:38.291: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:00:38.365: INFO: namespace projected-6267 deletion completed in 6.112248406s • [SLOW TEST:10.302 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:00:38.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building 
a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-6872 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 23 14:00:38.448: INFO: Found 0 stateful pods, waiting for 3 Apr 23 14:00:48.453: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 23 14:00:48.453: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 23 14:00:48.454: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 23 14:00:48.480: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 23 14:00:59.015: INFO: Updating stateful set ss2 Apr 23 14:00:59.047: INFO: Waiting for Pod statefulset-6872/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 23 14:01:09.191: INFO: Found 2 stateful pods, waiting for 3 Apr 23 14:01:19.196: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 23 14:01:19.196: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 23 14:01:19.196: 
INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 23 14:01:19.221: INFO: Updating stateful set ss2 Apr 23 14:01:19.236: INFO: Waiting for Pod statefulset-6872/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 23 14:01:29.245: INFO: Waiting for Pod statefulset-6872/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 23 14:01:39.261: INFO: Updating stateful set ss2 Apr 23 14:01:39.286: INFO: Waiting for StatefulSet statefulset-6872/ss2 to complete update Apr 23 14:01:39.286: INFO: Waiting for Pod statefulset-6872/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 23 14:01:49.295: INFO: Deleting all statefulset in ns statefulset-6872 Apr 23 14:01:49.299: INFO: Scaling statefulset ss2 to 0 Apr 23 14:02:19.319: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 14:02:19.322: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:02:19.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6872" for this suite. 
Apr 23 14:02:25.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:02:25.425: INFO: namespace statefulset-6872 deletion completed in 6.081503405s • [SLOW TEST:107.059 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:02:25.425: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 23 14:02:30.066: INFO: Successfully updated pod "labelsupdate60f555f0-b75b-426a-a8a7-4141236a4287" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:02:34.089: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6617" for this suite. Apr 23 14:02:56.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:02:56.188: INFO: namespace projected-6617 deletion completed in 22.094341761s • [SLOW TEST:30.762 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:02:56.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 23 14:02:56.299: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5715,SelfLink:/api/v1/namespaces/watch-5715/configmaps/e2e-watch-test-watch-closed,UID:44b57fd4-1071-4876-89f5-fbf62178b824,ResourceVersion:7010218,Generation:0,CreationTimestamp:2020-04-23 14:02:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 23 14:02:56.300: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5715,SelfLink:/api/v1/namespaces/watch-5715/configmaps/e2e-watch-test-watch-closed,UID:44b57fd4-1071-4876-89f5-fbf62178b824,ResourceVersion:7010219,Generation:0,CreationTimestamp:2020-04-23 14:02:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 23 14:02:56.335: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5715,SelfLink:/api/v1/namespaces/watch-5715/configmaps/e2e-watch-test-watch-closed,UID:44b57fd4-1071-4876-89f5-fbf62178b824,ResourceVersion:7010220,Generation:0,CreationTimestamp:2020-04-23 14:02:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 23 14:02:56.336: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5715,SelfLink:/api/v1/namespaces/watch-5715/configmaps/e2e-watch-test-watch-closed,UID:44b57fd4-1071-4876-89f5-fbf62178b824,ResourceVersion:7010221,Generation:0,CreationTimestamp:2020-04-23 14:02:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:02:56.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5715" for this suite. 
Apr 23 14:03:02.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:03:02.424: INFO: namespace watch-5715 deletion completed in 6.084619366s
• [SLOW TEST:6.236 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:03:02.425: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 23 14:03:24.513: INFO: Container started at 2020-04-23 14:03:04 +0000 UTC, pod became ready at 2020-04-23 14:03:24 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:03:24.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5042" for this suite.
Apr 23 14:03:46.541: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:03:46.614: INFO: namespace container-probe-5042 deletion completed in 22.096832756s
• [SLOW TEST:44.190 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:03:46.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name projected-secret-test-10c4dc04-8bdc-4dd5-8566-3167afee02bf
STEP: Creating a pod to test consume secrets
Apr 23 14:03:46.707: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-893ec133-df6e-441b-8a0c-160a5f376445" in namespace "projected-7156" to be "success or failure"
Apr 23 14:03:46.711: INFO: Pod "pod-projected-secrets-893ec133-df6e-441b-8a0c-160a5f376445": Phase="Pending", Reason="", readiness=false. Elapsed: 3.868908ms
Apr 23 14:03:48.715: INFO: Pod "pod-projected-secrets-893ec133-df6e-441b-8a0c-160a5f376445": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007828058s
Apr 23 14:03:50.720: INFO: Pod "pod-projected-secrets-893ec133-df6e-441b-8a0c-160a5f376445": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012494719s
STEP: Saw pod success
Apr 23 14:03:50.720: INFO: Pod "pod-projected-secrets-893ec133-df6e-441b-8a0c-160a5f376445" satisfied condition "success or failure"
Apr 23 14:03:50.723: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-893ec133-df6e-441b-8a0c-160a5f376445 container secret-volume-test:
STEP: delete the pod
Apr 23 14:03:50.748: INFO: Waiting for pod pod-projected-secrets-893ec133-df6e-441b-8a0c-160a5f376445 to disappear
Apr 23 14:03:50.752: INFO: Pod pod-projected-secrets-893ec133-df6e-441b-8a0c-160a5f376445 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:03:50.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7156" for this suite.
Apr 23 14:03:56.800: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:03:56.878: INFO: namespace projected-7156 deletion completed in 6.121650111s
• [SLOW TEST:10.262 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:03:56.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 23 14:03:56.966: INFO: Waiting up to 5m0s for pod "pod-450afb74-fbc7-422a-bc46-a046a6bc227d" in namespace "emptydir-2875" to be "success or failure"
Apr 23 14:03:56.980: INFO: Pod "pod-450afb74-fbc7-422a-bc46-a046a6bc227d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.476056ms
Apr 23 14:03:58.983: INFO: Pod "pod-450afb74-fbc7-422a-bc46-a046a6bc227d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017048161s
Apr 23 14:04:00.987: INFO: Pod "pod-450afb74-fbc7-422a-bc46-a046a6bc227d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021073602s
STEP: Saw pod success
Apr 23 14:04:00.987: INFO: Pod "pod-450afb74-fbc7-422a-bc46-a046a6bc227d" satisfied condition "success or failure"
Apr 23 14:04:00.990: INFO: Trying to get logs from node iruya-worker2 pod pod-450afb74-fbc7-422a-bc46-a046a6bc227d container test-container:
STEP: delete the pod
Apr 23 14:04:01.012: INFO: Waiting for pod pod-450afb74-fbc7-422a-bc46-a046a6bc227d to disappear
Apr 23 14:04:01.033: INFO: Pod pod-450afb74-fbc7-422a-bc46-a046a6bc227d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:04:01.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2875" for this suite.
Apr 23 14:04:07.069: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:04:07.142: INFO: namespace emptydir-2875 deletion completed in 6.104692843s
• [SLOW TEST:10.264 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:04:07.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-42fb42d9-c93f-48c6-b712-c5703f53620b
STEP: Creating a pod to test consume configMaps
Apr 23 14:04:07.251: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-d0151a04-da8e-4630-af6f-94de903cb3de" in namespace "projected-6192" to be "success or failure"
Apr 23 14:04:07.270: INFO: Pod "pod-projected-configmaps-d0151a04-da8e-4630-af6f-94de903cb3de": Phase="Pending", Reason="", readiness=false. Elapsed: 18.1558ms
Apr 23 14:04:09.274: INFO: Pod "pod-projected-configmaps-d0151a04-da8e-4630-af6f-94de903cb3de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022617458s
Apr 23 14:04:11.277: INFO: Pod "pod-projected-configmaps-d0151a04-da8e-4630-af6f-94de903cb3de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025940076s
STEP: Saw pod success
Apr 23 14:04:11.277: INFO: Pod "pod-projected-configmaps-d0151a04-da8e-4630-af6f-94de903cb3de" satisfied condition "success or failure"
Apr 23 14:04:11.279: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-d0151a04-da8e-4630-af6f-94de903cb3de container projected-configmap-volume-test:
STEP: delete the pod
Apr 23 14:04:11.335: INFO: Waiting for pod pod-projected-configmaps-d0151a04-da8e-4630-af6f-94de903cb3de to disappear
Apr 23 14:04:11.385: INFO: Pod pod-projected-configmaps-d0151a04-da8e-4630-af6f-94de903cb3de no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:04:11.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6192" for this suite.
Apr 23 14:04:17.410: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:04:17.474: INFO: namespace projected-6192 deletion completed in 6.085715619s
• [SLOW TEST:10.332 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:04:17.475: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Apr 23 14:04:17.510: INFO: Waiting up to 5m0s for pod "client-containers-36b13f46-0886-4652-a6b5-1fb9c13197df" in namespace "containers-1820" to be "success or failure"
Apr 23 14:04:17.531: INFO: Pod "client-containers-36b13f46-0886-4652-a6b5-1fb9c13197df": Phase="Pending", Reason="", readiness=false. Elapsed: 21.448338ms
Apr 23 14:04:19.535: INFO: Pod "client-containers-36b13f46-0886-4652-a6b5-1fb9c13197df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025129192s
Apr 23 14:04:21.539: INFO: Pod "client-containers-36b13f46-0886-4652-a6b5-1fb9c13197df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029204545s
STEP: Saw pod success
Apr 23 14:04:21.539: INFO: Pod "client-containers-36b13f46-0886-4652-a6b5-1fb9c13197df" satisfied condition "success or failure"
Apr 23 14:04:21.542: INFO: Trying to get logs from node iruya-worker pod client-containers-36b13f46-0886-4652-a6b5-1fb9c13197df container test-container:
STEP: delete the pod
Apr 23 14:04:21.579: INFO: Waiting for pod client-containers-36b13f46-0886-4652-a6b5-1fb9c13197df to disappear
Apr 23 14:04:21.591: INFO: Pod client-containers-36b13f46-0886-4652-a6b5-1fb9c13197df no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:04:21.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1820" for this suite.
Apr 23 14:04:27.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:04:27.702: INFO: namespace containers-1820 deletion completed in 6.106639068s
• [SLOW TEST:10.227 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:04:27.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 23 14:04:27.759: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-a,UID:91705316-29ee-47ad-a416-f84fa2d428c3,ResourceVersion:7010533,Generation:0,CreationTimestamp:2020-04-23 14:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 23 14:04:27.759: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-a,UID:91705316-29ee-47ad-a416-f84fa2d428c3,ResourceVersion:7010533,Generation:0,CreationTimestamp:2020-04-23 14:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 23 14:04:37.766: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-a,UID:91705316-29ee-47ad-a416-f84fa2d428c3,ResourceVersion:7010553,Generation:0,CreationTimestamp:2020-04-23 14:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 23 14:04:37.767: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-a,UID:91705316-29ee-47ad-a416-f84fa2d428c3,ResourceVersion:7010553,Generation:0,CreationTimestamp:2020-04-23 14:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 23 14:04:47.775: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-a,UID:91705316-29ee-47ad-a416-f84fa2d428c3,ResourceVersion:7010573,Generation:0,CreationTimestamp:2020-04-23 14:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 23 14:04:47.775: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-a,UID:91705316-29ee-47ad-a416-f84fa2d428c3,ResourceVersion:7010573,Generation:0,CreationTimestamp:2020-04-23 14:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 23 14:04:57.783: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-a,UID:91705316-29ee-47ad-a416-f84fa2d428c3,ResourceVersion:7010593,Generation:0,CreationTimestamp:2020-04-23 14:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 23 14:04:57.783: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-a,UID:91705316-29ee-47ad-a416-f84fa2d428c3,ResourceVersion:7010593,Generation:0,CreationTimestamp:2020-04-23 14:04:27 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 23 14:05:07.790: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-b,UID:eb51c5b6-29e5-4b8f-a2a3-dec837170e37,ResourceVersion:7010613,Generation:0,CreationTimestamp:2020-04-23 14:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 23 14:05:07.790: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-b,UID:eb51c5b6-29e5-4b8f-a2a3-dec837170e37,ResourceVersion:7010613,Generation:0,CreationTimestamp:2020-04-23 14:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 23 14:05:17.796: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-b,UID:eb51c5b6-29e5-4b8f-a2a3-dec837170e37,ResourceVersion:7010634,Generation:0,CreationTimestamp:2020-04-23 14:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 23 14:05:17.796: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-9584,SelfLink:/api/v1/namespaces/watch-9584/configmaps/e2e-watch-test-configmap-b,UID:eb51c5b6-29e5-4b8f-a2a3-dec837170e37,ResourceVersion:7010634,Generation:0,CreationTimestamp:2020-04-23 14:05:07 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:05:27.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-9584" for this suite.
Apr 23 14:05:33.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:05:33.915: INFO: namespace watch-9584 deletion completed in 6.112843696s
• [SLOW TEST:66.213 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:05:33.915: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 23 14:05:33.990: INFO: Waiting up to 5m0s for pod "downward-api-5ddc6d9a-763f-406c-8421-beb8b7d0b1ef" in namespace "downward-api-4712" to be "success or failure"
Apr 23 14:05:34.060: INFO: Pod "downward-api-5ddc6d9a-763f-406c-8421-beb8b7d0b1ef": Phase="Pending", Reason="", readiness=false. Elapsed: 69.67302ms
Apr 23 14:05:36.065: INFO: Pod "downward-api-5ddc6d9a-763f-406c-8421-beb8b7d0b1ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074441746s
Apr 23 14:05:38.069: INFO: Pod "downward-api-5ddc6d9a-763f-406c-8421-beb8b7d0b1ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079276166s
STEP: Saw pod success
Apr 23 14:05:38.069: INFO: Pod "downward-api-5ddc6d9a-763f-406c-8421-beb8b7d0b1ef" satisfied condition "success or failure"
Apr 23 14:05:38.073: INFO: Trying to get logs from node iruya-worker2 pod downward-api-5ddc6d9a-763f-406c-8421-beb8b7d0b1ef container dapi-container:
STEP: delete the pod
Apr 23 14:05:38.092: INFO: Waiting for pod downward-api-5ddc6d9a-763f-406c-8421-beb8b7d0b1ef to disappear
Apr 23 14:05:38.107: INFO: Pod downward-api-5ddc6d9a-763f-406c-8421-beb8b7d0b1ef no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:05:38.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4712" for this suite.
Apr 23 14:05:44.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:05:44.206: INFO: namespace downward-api-4712 deletion completed in 6.095668439s
• [SLOW TEST:10.291 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:05:44.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5610
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 23 14:05:44.271: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 23 14:06:02.393: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.199:8080/dial?request=hostName&protocol=http&host=10.244.1.198&port=8080&tries=1'] Namespace:pod-network-test-5610 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 23 14:06:02.393: INFO: >>> kubeConfig: /root/.kube/config
I0423 14:06:02.434075 6 log.go:172] (0xc000037810) (0xc001a363c0) Create stream
I0423 14:06:02.434110 6 log.go:172] (0xc000037810) (0xc001a363c0) Stream added, broadcasting: 1
I0423 14:06:02.436834 6 log.go:172] (0xc000037810) Reply frame received for 1
I0423 14:06:02.436885 6 log.go:172] (0xc000037810) (0xc0021d0280) Create stream
I0423 14:06:02.436902 6 log.go:172] (0xc000037810) (0xc0021d0280) Stream added, broadcasting: 3
I0423 14:06:02.438254 6 log.go:172] (0xc000037810) Reply frame received for 3
I0423 14:06:02.438307 6 log.go:172] (0xc000037810) (0xc001d0c0a0) Create stream
I0423 14:06:02.438326 6 log.go:172] (0xc000037810) (0xc001d0c0a0) Stream added, broadcasting: 5
I0423 14:06:02.439365 6 log.go:172] (0xc000037810) Reply frame received for 5
I0423 14:06:02.538109 6 log.go:172] (0xc000037810) Data frame received for 3
I0423 14:06:02.538151 6 log.go:172] (0xc0021d0280) (3) Data frame handling
I0423 14:06:02.538178 6 log.go:172] (0xc0021d0280) (3) Data frame sent
I0423 14:06:02.538617 6 log.go:172] (0xc000037810) Data frame received for 3
I0423 14:06:02.538641 6 log.go:172] (0xc0021d0280) (3) Data frame handling
I0423 14:06:02.538824 6 log.go:172] (0xc000037810) Data frame received for 5
I0423 14:06:02.538849 6 log.go:172] (0xc001d0c0a0) (5) Data frame handling
I0423 14:06:02.540575 6 log.go:172] (0xc000037810) Data frame received for 1
I0423 14:06:02.540660 6 log.go:172] (0xc001a363c0) (1) Data frame handling
I0423 14:06:02.540696 6 log.go:172] (0xc001a363c0) (1) Data frame sent
I0423 14:06:02.540722 6 log.go:172] (0xc000037810) (0xc001a363c0) Stream removed, broadcasting: 1
I0423 14:06:02.540753 6 log.go:172] (0xc000037810) Go away received
I0423 14:06:02.540817 6 log.go:172] (0xc000037810) (0xc001a363c0) Stream removed, broadcasting: 1
I0423 14:06:02.540833 6 log.go:172] (0xc000037810) (0xc0021d0280) Stream removed, broadcasting: 3
I0423 14:06:02.540843 6 log.go:172] (0xc000037810) (0xc001d0c0a0) Stream removed, broadcasting: 5
Apr 23 14:06:02.540: INFO: Waiting for endpoints: map[]
Apr 23 14:06:02.544: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.199:8080/dial?request=hostName&protocol=http&host=10.244.2.149&port=8080&tries=1'] Namespace:pod-network-test-5610 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 23 14:06:02.544: INFO: >>> kubeConfig: /root/.kube/config
I0423 14:06:02.579791 6 log.go:172] (0xc00061e8f0) (0xc001a36640) Create stream
I0423 14:06:02.579823 6 log.go:172] (0xc00061e8f0) (0xc001a36640) Stream added, broadcasting: 1
I0423 14:06:02.581804 6 log.go:172] (0xc00061e8f0) Reply frame received for 1
I0423 14:06:02.581832 6 log.go:172] (0xc00061e8f0) (0xc0021d0500) Create stream
I0423 14:06:02.581847 6 log.go:172] (0xc00061e8f0) (0xc0021d0500) Stream added, broadcasting: 3
I0423 14:06:02.582445 6 log.go:172] (0xc00061e8f0) Reply frame received for 3
I0423 14:06:02.582470 6 log.go:172] (0xc00061e8f0) (0xc0021d06e0) Create stream
I0423 14:06:02.582479 6 log.go:172] (0xc00061e8f0) (0xc0021d06e0) Stream added, broadcasting: 5
I0423 14:06:02.583237 6 log.go:172] (0xc00061e8f0) Reply frame received for 5
I0423 14:06:02.656896 6 log.go:172] (0xc00061e8f0) Data frame received for 3
I0423 14:06:02.656917 6 log.go:172] (0xc0021d0500) (3) Data frame handling
I0423 14:06:02.656928 6 log.go:172] (0xc0021d0500) (3) Data frame sent
I0423 14:06:02.658080 6 log.go:172] (0xc00061e8f0) Data frame received for 3
I0423 14:06:02.658109 6 log.go:172] (0xc0021d0500) (3) Data frame handling
I0423 14:06:02.658128 6 log.go:172] (0xc00061e8f0) Data frame received for 5
I0423 14:06:02.658138 6 log.go:172] (0xc0021d06e0) (5) Data frame handling
I0423 14:06:02.659856 6 log.go:172] (0xc00061e8f0) Data frame received for 1
I0423 14:06:02.659869 6 log.go:172] (0xc001a36640) (1) Data frame handling
I0423 14:06:02.659877 6 log.go:172] (0xc001a36640) (1) Data frame sent
I0423 14:06:02.659892 6 log.go:172] (0xc00061e8f0) (0xc001a36640) Stream removed, broadcasting: 1
I0423 14:06:02.659919 6 log.go:172] (0xc00061e8f0) Go away received
I0423 14:06:02.660034 6 log.go:172] (0xc00061e8f0) (0xc001a36640) Stream removed, broadcasting: 1
I0423 14:06:02.660064 6 log.go:172] (0xc00061e8f0) (0xc0021d0500) Stream removed, broadcasting: 3
I0423 14:06:02.660081 6 log.go:172] (0xc00061e8f0) (0xc0021d06e0) Stream removed, broadcasting: 5
Apr 23 14:06:02.660: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:06:02.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5610" for this suite.
Apr 23 14:06:24.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:06:24.773: INFO: namespace pod-network-test-5610 deletion completed in 22.109558081s
• [SLOW TEST:40.566 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:06:24.774: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-3477964a-d38f-42bd-8a09-1bcb3e76bc5f in namespace container-probe-8794
Apr 23 14:06:28.870: INFO: Started pod busybox-3477964a-d38f-42bd-8a09-1bcb3e76bc5f in namespace container-probe-8794
STEP: checking the pod's current state and verifying that restartCount is present
Apr 23 14:06:28.872: INFO: Initial restart count of pod busybox-3477964a-d38f-42bd-8a09-1bcb3e76bc5f is 0
Apr 23 14:07:18.984: INFO: Restart count of pod container-probe-8794/busybox-3477964a-d38f-42bd-8a09-1bcb3e76bc5f is now 1 (50.11112561s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:07:18.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8794" for this suite.
Apr 23 14:07:25.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:07:25.117: INFO: namespace container-probe-8794 deletion completed in 6.112001607s
• [SLOW TEST:60.343 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:07:25.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-4478
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 23 14:07:25.167: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 23 14:07:49.311: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.200:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4478 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 23 14:07:49.311: INFO: >>> kubeConfig: /root/.kube/config
I0423 14:07:49.346193 6 log.go:172] (0xc000fbc630) (0xc002f3cdc0) Create stream
I0423 14:07:49.346221 6 log.go:172] (0xc000fbc630) (0xc002f3cdc0) Stream added, broadcasting: 1
I0423 14:07:49.348765 6 log.go:172] (0xc000fbc630) Reply frame received for 1
I0423 14:07:49.348827 6 log.go:172] (0xc000fbc630) (0xc0026cc280) Create stream
I0423 14:07:49.348853 6 log.go:172] (0xc000fbc630) (0xc0026cc280) Stream added, broadcasting: 3
I0423 14:07:49.350120 6 log.go:172] (0xc000fbc630) Reply frame received for 3
I0423 14:07:49.350172 6 log.go:172] (0xc000fbc630) (0xc0026cc320) Create stream
I0423 14:07:49.350185 6 log.go:172] (0xc000fbc630) (0xc0026cc320) Stream added, broadcasting: 5
I0423 14:07:49.351180 6 log.go:172] (0xc000fbc630) Reply frame received for 5
I0423 14:07:49.428817 6 log.go:172] (0xc000fbc630) Data frame received for 5
I0423 14:07:49.428852 6 log.go:172] (0xc0026cc320) (5) Data frame handling
I0423 14:07:49.428884 6 log.go:172] (0xc000fbc630) Data frame received for 3
I0423 14:07:49.428906 6 log.go:172] (0xc0026cc280) (3) Data frame handling
I0423 14:07:49.428939 6 log.go:172] (0xc0026cc280) (3) Data frame sent
I0423 14:07:49.428961 6 log.go:172] (0xc000fbc630) Data frame received for 3
I0423 14:07:49.428982 6 log.go:172] (0xc0026cc280) (3) Data frame handling
I0423 14:07:49.430385 6 log.go:172] (0xc000fbc630) Data frame received for 1
I0423 14:07:49.430416 6 log.go:172]
(0xc002f3cdc0) (1) Data frame handling I0423 14:07:49.430431 6 log.go:172] (0xc002f3cdc0) (1) Data frame sent I0423 14:07:49.430470 6 log.go:172] (0xc000fbc630) (0xc002f3cdc0) Stream removed, broadcasting: 1 I0423 14:07:49.430514 6 log.go:172] (0xc000fbc630) Go away received I0423 14:07:49.430595 6 log.go:172] (0xc000fbc630) (0xc002f3cdc0) Stream removed, broadcasting: 1 I0423 14:07:49.431391 6 log.go:172] (0xc000fbc630) (0xc0026cc280) Stream removed, broadcasting: 3 I0423 14:07:49.431428 6 log.go:172] (0xc000fbc630) (0xc0026cc320) Stream removed, broadcasting: 5 Apr 23 14:07:49.431: INFO: Found all expected endpoints: [netserver-0] Apr 23 14:07:49.434: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.151:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-4478 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 23 14:07:49.434: INFO: >>> kubeConfig: /root/.kube/config I0423 14:07:49.466843 6 log.go:172] (0xc000f6cf20) (0xc002bb6f00) Create stream I0423 14:07:49.466872 6 log.go:172] (0xc000f6cf20) (0xc002bb6f00) Stream added, broadcasting: 1 I0423 14:07:49.469473 6 log.go:172] (0xc000f6cf20) Reply frame received for 1 I0423 14:07:49.469504 6 log.go:172] (0xc000f6cf20) (0xc002eaa780) Create stream I0423 14:07:49.469523 6 log.go:172] (0xc000f6cf20) (0xc002eaa780) Stream added, broadcasting: 3 I0423 14:07:49.470615 6 log.go:172] (0xc000f6cf20) Reply frame received for 3 I0423 14:07:49.470690 6 log.go:172] (0xc000f6cf20) (0xc001b35a40) Create stream I0423 14:07:49.470723 6 log.go:172] (0xc000f6cf20) (0xc001b35a40) Stream added, broadcasting: 5 I0423 14:07:49.472232 6 log.go:172] (0xc000f6cf20) Reply frame received for 5 I0423 14:07:49.542856 6 log.go:172] (0xc000f6cf20) Data frame received for 3 I0423 14:07:49.542903 6 log.go:172] (0xc002eaa780) (3) Data frame handling I0423 14:07:49.542911 6 log.go:172] (0xc002eaa780) (3) Data 
frame sent I0423 14:07:49.542922 6 log.go:172] (0xc000f6cf20) Data frame received for 3 I0423 14:07:49.542929 6 log.go:172] (0xc002eaa780) (3) Data frame handling I0423 14:07:49.542963 6 log.go:172] (0xc000f6cf20) Data frame received for 5 I0423 14:07:49.542972 6 log.go:172] (0xc001b35a40) (5) Data frame handling I0423 14:07:49.544634 6 log.go:172] (0xc000f6cf20) Data frame received for 1 I0423 14:07:49.544657 6 log.go:172] (0xc002bb6f00) (1) Data frame handling I0423 14:07:49.544667 6 log.go:172] (0xc002bb6f00) (1) Data frame sent I0423 14:07:49.544686 6 log.go:172] (0xc000f6cf20) (0xc002bb6f00) Stream removed, broadcasting: 1 I0423 14:07:49.544702 6 log.go:172] (0xc000f6cf20) Go away received I0423 14:07:49.544807 6 log.go:172] (0xc000f6cf20) (0xc002bb6f00) Stream removed, broadcasting: 1 I0423 14:07:49.544839 6 log.go:172] (0xc000f6cf20) (0xc002eaa780) Stream removed, broadcasting: 3 I0423 14:07:49.544861 6 log.go:172] (0xc000f6cf20) (0xc001b35a40) Stream removed, broadcasting: 5 Apr 23 14:07:49.544: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:07:49.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-4478" for this suite. 
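The container-probe case above (pod busybox-3477964a-d38f-42bd-8a09-1bcb3e76bc5f in container-probe-8794) passes once restartCount goes from 0 to 1. The log only records the probe command (`cat /tmp/health`) and the ~50s until the restart; everything else in this sketch — image, args, delays, thresholds — is an illustrative assumption about the kind of pod such a test creates, not taken from this run:

```yaml
# Hedged sketch of an exec-liveness-probe pod; only the probe command
# appears in the log above, the rest is assumed for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-sketch
  namespace: container-probe-8794
spec:
  containers:
  - name: busybox
    image: busybox
    # Create the health file, then remove it so the probe starts failing
    # and the kubelet restarts the container (restartCount 0 -> 1).
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 15
      failureThreshold: 1
```

The ~50s elapsed before the restart reported in the log is consistent with a probe that only begins after an initial delay, with the kubelet then restarting the container on the first failed `cat`.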
Apr 23 14:08:11.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:08:11.709: INFO: namespace pod-network-test-4478 deletion completed in 22.159647041s • [SLOW TEST:46.592 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:08:11.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-9de8ac07-0331-4246-95fc-589654d725bc STEP: Creating a pod to test consume secrets Apr 23 14:08:11.791: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-469910ff-8208-4717-997d-8f53ed133bf5" in namespace "projected-7037" to be "success or failure" Apr 23 14:08:11.793: INFO: Pod 
"pod-projected-secrets-469910ff-8208-4717-997d-8f53ed133bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 1.811831ms Apr 23 14:08:13.841: INFO: Pod "pod-projected-secrets-469910ff-8208-4717-997d-8f53ed133bf5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05008086s Apr 23 14:08:15.845: INFO: Pod "pod-projected-secrets-469910ff-8208-4717-997d-8f53ed133bf5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054713541s STEP: Saw pod success Apr 23 14:08:15.845: INFO: Pod "pod-projected-secrets-469910ff-8208-4717-997d-8f53ed133bf5" satisfied condition "success or failure" Apr 23 14:08:15.849: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-469910ff-8208-4717-997d-8f53ed133bf5 container projected-secret-volume-test: STEP: delete the pod Apr 23 14:08:15.895: INFO: Waiting for pod pod-projected-secrets-469910ff-8208-4717-997d-8f53ed133bf5 to disappear Apr 23 14:08:15.906: INFO: Pod pod-projected-secrets-469910ff-8208-4717-997d-8f53ed133bf5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:08:15.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7037" for this suite. 
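The projected-secret case above mounts secret projected-secret-test-map-9de8ac07-0331-4246-95fc-589654d725bc into a pod "with mappings and Item Mode set". The secret name and container name come from the log; the key, path, mode, and mount path below are illustrative assumptions about what that wiring looks like:

```yaml
# Hedged sketch of a projected secret volume with an item mapping and a
# per-item mode; key/path/mode values are assumed, not from this log.
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-sketch
  namespace: projected-7037
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox
    # Print the remapped file so the test can compare content and mode.
    args: ["cat", "/etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-test-map-9de8ac07-0331-4246-95fc-589654d725bc
          items:
          - key: data-1          # assumed key name
            path: new-path-data-1 # assumed remapped path ("mapping")
            mode: 0400            # assumed "Item Mode"
```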
Apr 23 14:08:21.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:08:22.039: INFO: namespace projected-7037 deletion completed in 6.128888863s • [SLOW TEST:10.329 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:08:22.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 23 14:08:26.133: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:08:26.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5951" for this suite. Apr 23 14:08:32.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:08:32.317: INFO: namespace container-runtime-5951 deletion completed in 6.142941579s • [SLOW TEST:10.278 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:08:32.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 
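The "should serve a basic endpoint from pods" case named above works by creating a Service, then adding and deleting pods that match its selector while watching the Endpoints object. A minimal sketch of such a Service; the selector label is an assumption (the real test's label is not shown in this log):

```yaml
# Hedged sketch of the Service whose Endpoints the test watches; the
# selector label is illustrative, not taken from this run.
apiVersion: v1
kind: Service
metadata:
  name: endpoint-test2
  namespace: services-8042
spec:
  selector:
    testid: endpoint-test2   # assumed label; pods carrying it populate Endpoints
  ports:
  - port: 80
    targetPort: 80
```

Pods created with the matching label appear in the Endpoints object as `map[podname:[80]]`, and deleting them removes their entries again, which is exactly the sequence the STEP lines validate.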
STEP: creating service endpoint-test2 in namespace services-8042 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8042 to expose endpoints map[] Apr 23 14:08:32.449: INFO: Get endpoints failed (20.805867ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 23 14:08:33.453: INFO: successfully validated that service endpoint-test2 in namespace services-8042 exposes endpoints map[] (1.024834671s elapsed) STEP: Creating pod pod1 in namespace services-8042 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8042 to expose endpoints map[pod1:[80]] Apr 23 14:08:36.527: INFO: successfully validated that service endpoint-test2 in namespace services-8042 exposes endpoints map[pod1:[80]] (3.066439371s elapsed) STEP: Creating pod pod2 in namespace services-8042 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8042 to expose endpoints map[pod1:[80] pod2:[80]] Apr 23 14:08:39.614: INFO: successfully validated that service endpoint-test2 in namespace services-8042 exposes endpoints map[pod1:[80] pod2:[80]] (3.082377702s elapsed) STEP: Deleting pod pod1 in namespace services-8042 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8042 to expose endpoints map[pod2:[80]] Apr 23 14:08:40.640: INFO: successfully validated that service endpoint-test2 in namespace services-8042 exposes endpoints map[pod2:[80]] (1.02098495s elapsed) STEP: Deleting pod pod2 in namespace services-8042 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8042 to expose endpoints map[] Apr 23 14:08:40.658: INFO: successfully validated that service endpoint-test2 in namespace services-8042 exposes endpoints map[] (12.050629ms elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:08:40.721: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"services-8042" for this suite. Apr 23 14:09:02.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:09:02.845: INFO: namespace services-8042 deletion completed in 22.120024538s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:30.527 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:09:02.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 14:09:02.885: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6553' Apr 23 14:09:05.248: INFO: stderr: "" Apr 23 14:09:05.248: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 23 14:09:05.248: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6553' Apr 23 14:09:05.601: INFO: stderr: "" Apr 23 14:09:05.601: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 23 14:09:06.605: INFO: Selector matched 1 pods for map[app:redis] Apr 23 14:09:06.605: INFO: Found 0 / 1 Apr 23 14:09:07.605: INFO: Selector matched 1 pods for map[app:redis] Apr 23 14:09:07.605: INFO: Found 0 / 1 Apr 23 14:09:08.605: INFO: Selector matched 1 pods for map[app:redis] Apr 23 14:09:08.605: INFO: Found 1 / 1 Apr 23 14:09:08.605: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 23 14:09:08.608: INFO: Selector matched 1 pods for map[app:redis] Apr 23 14:09:08.608: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 23 14:09:08.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-wk6pl --namespace=kubectl-6553' Apr 23 14:09:08.719: INFO: stderr: "" Apr 23 14:09:08.719: INFO: stdout: "Name: redis-master-wk6pl\nNamespace: kubectl-6553\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Thu, 23 Apr 2020 14:09:05 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.204\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://af8b379bd8cc5578bd229ab91d6b24cebe9ef8df16ef987d0010faedd8e73bb6\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 23 Apr 2020 14:09:07 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-lgg4h (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-lgg4h:\n Type: Secret (a volume populated 
by a Secret)\n SecretName: default-token-lgg4h\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 3s default-scheduler Successfully assigned kubectl-6553/redis-master-wk6pl to iruya-worker2\n Normal Pulled 2s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" Apr 23 14:09:08.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-6553' Apr 23 14:09:08.826: INFO: stderr: "" Apr 23 14:09:08.826: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6553\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 3s replication-controller Created pod: redis-master-wk6pl\n" Apr 23 14:09:08.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6553' Apr 23 14:09:08.944: INFO: stderr: "" Apr 23 14:09:08.944: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6553\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.111.35.178\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.204:6379\nSession Affinity: None\nEvents: \n" Apr 23 
14:09:08.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Apr 23 14:09:09.084: INFO: stderr: "" Apr 23 14:09:09.084: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 23 Apr 2020 14:09:07 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 23 Apr 2020 14:09:07 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 23 Apr 2020 14:09:07 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 23 Apr 2020 14:09:07 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n 
Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 38d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 38d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 38d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Apr 23 14:09:09.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6553' Apr 23 14:09:09.200: INFO: stderr: "" Apr 23 14:09:09.200: INFO: stdout: "Name: kubectl-6553\nLabels: e2e-framework=kubectl\n e2e-run=3574706b-c38b-47d5-b1f9-8cd6bffcd536\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:09:09.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6553" for this suite. 
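The `kubectl describe rc redis-master` output above is consistent with a ReplicationController like the following, reconstructed from the Labels, Selector, Image, and Port fields kubectl printed; anything not shown in that output (e.g. exact field ordering) is an assumption:

```yaml
# Sketch of the redis-master RC, reconstructed from the describe output
# quoted in the log above.
apiVersion: v1
kind: ReplicationController
metadata:
  name: redis-master
  namespace: kubectl-6553
  labels:
    app: redis
    role: master
spec:
  replicas: 1
  selector:
    app: redis
    role: master
  template:
    metadata:
      labels:
        app: redis
        role: master
    spec:
      containers:
      - name: redis-master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        ports:
        - containerPort: 6379   # Port: 6379/TCP in the describe output
```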
Apr 23 14:09:31.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:09:31.289: INFO: namespace kubectl-6553 deletion completed in 22.085384241s • [SLOW TEST:28.444 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:09:31.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Apr 23 14:09:31.349: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4562' Apr 23 14:09:31.628: INFO: stderr: "" Apr 23 14:09:31.628: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Waiting for Redis master to start. Apr 23 14:09:32.632: INFO: Selector matched 1 pods for map[app:redis] Apr 23 14:09:32.632: INFO: Found 0 / 1 Apr 23 14:09:33.633: INFO: Selector matched 1 pods for map[app:redis] Apr 23 14:09:33.633: INFO: Found 0 / 1 Apr 23 14:09:34.633: INFO: Selector matched 1 pods for map[app:redis] Apr 23 14:09:34.633: INFO: Found 1 / 1 Apr 23 14:09:34.633: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 23 14:09:34.636: INFO: Selector matched 1 pods for map[app:redis] Apr 23 14:09:34.636: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Apr 23 14:09:34.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rkmt8 redis-master --namespace=kubectl-4562' Apr 23 14:09:34.752: INFO: stderr: "" Apr 23 14:09:34.752: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Apr 14:09:34.058 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Apr 14:09:34.058 # Server started, Redis version 3.2.12\n1:M 23 Apr 14:09:34.058 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 23 Apr 14:09:34.058 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Apr 23 14:09:34.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rkmt8 redis-master --namespace=kubectl-4562 --tail=1' Apr 23 14:09:34.875: INFO: stderr: "" Apr 23 14:09:34.875: INFO: stdout: "1:M 23 Apr 14:09:34.058 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Apr 23 14:09:34.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rkmt8 redis-master --namespace=kubectl-4562 --limit-bytes=1' Apr 23 14:09:34.975: INFO: stderr: "" Apr 23 14:09:34.975: INFO: stdout: " " STEP: exposing timestamps Apr 23 14:09:34.975: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rkmt8 redis-master --namespace=kubectl-4562 --tail=1 --timestamps' Apr 23 14:09:35.073: INFO: stderr: "" Apr 23 14:09:35.073: INFO: stdout: "2020-04-23T14:09:34.058734894Z 1:M 23 Apr 14:09:34.058 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Apr 23 14:09:37.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rkmt8 redis-master --namespace=kubectl-4562 --since=1s' Apr 23 14:09:37.685: INFO: stderr: "" Apr 23 14:09:37.685: INFO: stdout: "" Apr 23 14:09:37.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-rkmt8 redis-master --namespace=kubectl-4562 --since=24h' Apr 23 14:09:37.787: INFO: stderr: "" Apr 23 14:09:37.787: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 23 Apr 14:09:34.058 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 23 Apr 14:09:34.058 # Server started, Redis version 3.2.12\n1:M 23 Apr 14:09:34.058 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 23 Apr 14:09:34.058 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Apr 23 14:09:37.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4562' Apr 23 14:09:37.898: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 23 14:09:37.898: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Apr 23 14:09:37.898: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4562' Apr 23 14:09:37.990: INFO: stderr: "No resources found.\n" Apr 23 14:09:37.990: INFO: stdout: "" Apr 23 14:09:37.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4562 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 23 14:09:38.215: INFO: stderr: "" Apr 23 14:09:38.215: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:09:38.215: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4562" for this suite. 
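The log-filtering flags exercised above (`--tail`, `--limit-bytes`, `--timestamps`, `--since`) are plain stream truncation applied to the container's captured output. As a rough local analogue that needs no cluster (the file path and contents below are invented for illustration):

```shell
# Write a throwaway three-line "log" to stand in for container output.
printf 'line one\nline two\nline three\n' > /tmp/sample.log

# Roughly what --tail=1 does: keep only the last line.
tail -n 1 /tmp/sample.log

# Roughly what --limit-bytes=1 does: keep only the first byte.
head -c 1 /tmp/sample.log
```

`--since` and `--timestamps` have no direct coreutils analogue; they rely on the per-line timestamps the container runtime records alongside each log line, which is why `--since=1s` above returned nothing while `--since=24h` returned the full startup banner.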
Apr 23 14:09:44.262: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:09:44.350: INFO: namespace kubectl-4562 deletion completed in 6.130187901s • [SLOW TEST:13.061 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:09:44.350: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 14:09:44.446: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 23 14:09:49.452: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 23 14:09:49.452: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 23 14:09:51.456: INFO: Creating deployment "test-rollover-deployment" Apr 23 14:09:51.465: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Apr 23 
14:09:53.471: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 23 14:09:53.477: INFO: Ensure that both replica sets have 1 created replica Apr 23 14:09:53.482: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 23 14:09:53.487: INFO: Updating deployment test-rollover-deployment Apr 23 14:09:53.487: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 23 14:09:55.512: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 23 14:09:55.518: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 23 14:09:55.524: INFO: all replica sets need to contain the pod-template-hash label Apr 23 14:09:55.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247793, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 14:09:57.530: INFO: all replica sets need to contain the pod-template-hash label Apr 23 14:09:57.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247793, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 14:09:59.532: INFO: all replica sets need to contain the pod-template-hash label Apr 23 14:09:59.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247798, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 14:10:01.535: INFO: all replica sets need to contain the pod-template-hash label Apr 23 14:10:01.535: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247798, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 14:10:03.531: INFO: all replica sets need to contain the pod-template-hash label Apr 23 14:10:03.531: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247798, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 14:10:05.552: INFO: all 
replica sets need to contain the pod-template-hash label Apr 23 14:10:05.552: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247798, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 14:10:07.544: INFO: all replica sets need to contain the pod-template-hash label Apr 23 14:10:07.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247798, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723247791, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet 
\"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 14:10:09.532: INFO: Apr 23 14:10:09.532: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 23 14:10:09.539: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-5154,SelfLink:/apis/apps/v1/namespaces/deployment-5154/deployments/test-rollover-deployment,UID:80ec97da-6e83-44d9-ab5d-dcff05fe2e6b,ResourceVersion:7011642,Generation:2,CreationTimestamp:2020-04-23 14:09:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-23 14:09:51 +0000 UTC 2020-04-23 14:09:51 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-23 14:10:09 +0000 UTC 2020-04-23 14:09:51 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 23 14:10:09.542: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-5154,SelfLink:/apis/apps/v1/namespaces/deployment-5154/replicasets/test-rollover-deployment-854595fc44,UID:b9d27f42-62c3-4098-8ab2-95d8ecb2de3b,ResourceVersion:7011631,Generation:2,CreationTimestamp:2020-04-23 14:09:53 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 80ec97da-6e83-44d9-ab5d-dcff05fe2e6b 0xc0032bcd37 0xc0032bcd38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 23 14:10:09.542: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 23 14:10:09.542: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-5154,SelfLink:/apis/apps/v1/namespaces/deployment-5154/replicasets/test-rollover-controller,UID:79b86673-b5ce-4b2f-8dee-3f9de966983c,ResourceVersion:7011640,Generation:2,CreationTimestamp:2020-04-23 14:09:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 80ec97da-6e83-44d9-ab5d-dcff05fe2e6b 0xc0032bcbb7 0xc0032bcbb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 23 14:10:09.542: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-5154,SelfLink:/apis/apps/v1/namespaces/deployment-5154/replicasets/test-rollover-deployment-9b8b997cf,UID:2df74df3-a649-463a-bf5d-6c4f32746722,ResourceVersion:7011593,Generation:2,CreationTimestamp:2020-04-23 14:09:51 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 80ec97da-6e83-44d9-ab5d-dcff05fe2e6b 0xc0032bce10 0xc0032bce11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 23 14:10:09.545: INFO: Pod "test-rollover-deployment-854595fc44-5pq72" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-5pq72,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-5154,SelfLink:/api/v1/namespaces/deployment-5154/pods/test-rollover-deployment-854595fc44-5pq72,UID:99ba2323-b0a2-42ad-889b-75f6be030b69,ResourceVersion:7011608,Generation:0,CreationTimestamp:2020-04-23 14:09:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 b9d27f42-62c3-4098-8ab2-95d8ecb2de3b 0xc0021ae207 0xc0021ae208}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-2p9r7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-2p9r7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-2p9r7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0021ae280} {node.kubernetes.io/unreachable Exists NoExecute 0xc0021ae2a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:09:53 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:09:58 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:09:58 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:09:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.205,StartTime:2020-04-23 14:09:53 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-23 14:09:57 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://bea6f51f4164a774746879e03caf0ebc4c557f8cb7f9aeefb3b0c04397ce97ae}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:10:09.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5154" for this suite. Apr 23 14:10:15.585: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:10:15.675: INFO: namespace deployment-5154 deletion completed in 6.127278823s • [SLOW TEST:31.325 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:10:15.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 23 14:10:15.761: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e06b7e26-b548-44db-a36d-cee9fc45bb87" in namespace "downward-api-1485" to be "success or failure" Apr 23 14:10:15.764: INFO: Pod "downwardapi-volume-e06b7e26-b548-44db-a36d-cee9fc45bb87": Phase="Pending", Reason="", readiness=false. Elapsed: 3.241692ms Apr 23 14:10:17.769: INFO: Pod "downwardapi-volume-e06b7e26-b548-44db-a36d-cee9fc45bb87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007615933s Apr 23 14:10:19.773: INFO: Pod "downwardapi-volume-e06b7e26-b548-44db-a36d-cee9fc45bb87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011583545s STEP: Saw pod success Apr 23 14:10:19.773: INFO: Pod "downwardapi-volume-e06b7e26-b548-44db-a36d-cee9fc45bb87" satisfied condition "success or failure" Apr 23 14:10:19.776: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e06b7e26-b548-44db-a36d-cee9fc45bb87 container client-container: STEP: delete the pod Apr 23 14:10:19.830: INFO: Waiting for pod downwardapi-volume-e06b7e26-b548-44db-a36d-cee9fc45bb87 to disappear Apr 23 14:10:19.843: INFO: Pod downwardapi-volume-e06b7e26-b548-44db-a36d-cee9fc45bb87 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:10:19.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1485" for this suite. 
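The assertion behind this spec is a defaulting rule: when a container declares no memory limit, the downward API's `limits.memory` resource field falls back to the node's allocatable memory. A toy sketch of that rule (the byte value is invented for illustration):

```shell
container_limit=""              # the pod spec sets no memory limit
node_allocatable=4039008256     # hypothetical node allocatable, in bytes

# The downward API reports the container's limit if set,
# otherwise the node's allocatable memory.
reported=${container_limit:-$node_allocatable}
echo "$reported"
```

The test pod mounts a downward API volume exposing that field and the framework asserts the projected file contains the allocatable value rather than an empty or zero limit.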
Apr 23 14:10:25.859: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:10:25.930: INFO: namespace downward-api-1485 deletion completed in 6.084469076s • [SLOW TEST:10.255 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:10:25.930: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 23 14:10:26.007: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1" in namespace "downward-api-1403" to be "success or failure" Apr 23 14:10:26.026: INFO: Pod "downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.628989ms Apr 23 14:10:28.031: INFO: Pod "downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024272012s Apr 23 14:10:30.035: INFO: Pod "downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028603182s Apr 23 14:10:32.040: INFO: Pod "downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.033103778s STEP: Saw pod success Apr 23 14:10:32.040: INFO: Pod "downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1" satisfied condition "success or failure" Apr 23 14:10:32.043: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1 container client-container: STEP: delete the pod Apr 23 14:10:32.076: INFO: Waiting for pod downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1 to disappear Apr 23 14:10:32.117: INFO: Pod downwardapi-volume-ae5e7f11-a6a2-48a4-8898-a184026a28b1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:10:32.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1403" for this suite. 
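The repeated `Phase="Pending" ... Elapsed: ...` lines above are the framework polling the pod until it reaches a terminal phase ("success or failure"). A sketch of that wait loop, with the Pending, Pending, Succeeded sequence from the log simulated in-process (timings and parameter names are illustrative, not the framework's actual API):

```python
import time

def wait_for_phase(get_phase, want=("Succeeded", "Failed"), timeout=300.0,
                   interval=2.0, clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until a terminal phase or timeout, mirroring the
    framework's 5m 'success or failure' wait (a sketch, not the real code)."""
    start = clock()
    while clock() - start < timeout:
        phase = get_phase()
        if phase in want:
            return phase
        sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")

# Simulate the Pending -> Pending -> Succeeded sequence seen in the log.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_phase(lambda: next(phases), sleep=lambda _: None)
```

Injecting `clock` and `sleep` keeps the sketch testable without real delays.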
Apr 23 14:10:38.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:10:38.219: INFO: namespace downward-api-1403 deletion completed in 6.098223933s • [SLOW TEST:12.289 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:10:38.220: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Apr 23 14:10:38.821: INFO: created pod pod-service-account-defaultsa Apr 23 14:10:38.821: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 23 14:10:38.829: INFO: created pod pod-service-account-mountsa Apr 23 14:10:38.829: INFO: pod pod-service-account-mountsa service account token volume mount: true Apr 23 14:10:38.835: INFO: created pod pod-service-account-nomountsa Apr 23 14:10:38.835: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 23 14:10:38.865: INFO: created pod pod-service-account-defaultsa-mountspec Apr 23 14:10:38.865: INFO: pod 
pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 23 14:10:38.888: INFO: created pod pod-service-account-mountsa-mountspec Apr 23 14:10:38.888: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 23 14:10:38.901: INFO: created pod pod-service-account-nomountsa-mountspec Apr 23 14:10:38.901: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 23 14:10:38.945: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 23 14:10:38.945: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 23 14:10:38.961: INFO: created pod pod-service-account-mountsa-nomountspec Apr 23 14:10:38.961: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 23 14:10:39.013: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 23 14:10:39.013: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:10:39.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-448" for this suite. 
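The nine pods above cover the full matrix of `automountServiceAccountToken` settings: the pod's `spec` field, when set, overrides the service account's field, and the token is mounted when both are unset. A small sketch of that decision rule, checked against the "service account token volume mount:" lines in the log:

```python
def token_automounted(sa_automount, pod_automount):
    """Effective token-mount decision: pod spec overrides the service
    account; default is to mount. (A sketch of the documented precedence,
    not the kubelet's actual code.)"""
    if pod_automount is not None:
        return pod_automount
    if sa_automount is not None:
        return sa_automount
    return True

# Mirror a few rows of the matrix observed in the log.
assert token_automounted(None, None) is True    # defaultsa -> true
assert token_automounted(False, None) is False  # nomountsa -> false
assert token_automounted(False, True) is True   # nomountsa-mountspec -> true
assert token_automounted(True, False) is False  # mountsa-nomountspec -> false
```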
Apr 23 14:11:05.218: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:11:05.290: INFO: namespace svcaccounts-448 deletion completed in 26.15837333s • [SLOW TEST:27.071 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:11:05.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-4408/configmap-test-505f506b-1e35-42fb-86ab-af373fab65cf STEP: Creating a pod to test consume configMaps Apr 23 14:11:05.372: INFO: Waiting up to 5m0s for pod "pod-configmaps-b665f2ff-86c3-4cfb-91b3-1e917e21658a" in namespace "configmap-4408" to be "success or failure" Apr 23 14:11:05.386: INFO: Pod "pod-configmaps-b665f2ff-86c3-4cfb-91b3-1e917e21658a": Phase="Pending", Reason="", readiness=false. Elapsed: 13.376746ms Apr 23 14:11:07.389: INFO: Pod "pod-configmaps-b665f2ff-86c3-4cfb-91b3-1e917e21658a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.016941929s Apr 23 14:11:09.393: INFO: Pod "pod-configmaps-b665f2ff-86c3-4cfb-91b3-1e917e21658a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021025313s STEP: Saw pod success Apr 23 14:11:09.393: INFO: Pod "pod-configmaps-b665f2ff-86c3-4cfb-91b3-1e917e21658a" satisfied condition "success or failure" Apr 23 14:11:09.397: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b665f2ff-86c3-4cfb-91b3-1e917e21658a container env-test: STEP: delete the pod Apr 23 14:11:09.419: INFO: Waiting for pod pod-configmaps-b665f2ff-86c3-4cfb-91b3-1e917e21658a to disappear Apr 23 14:11:09.422: INFO: Pod pod-configmaps-b665f2ff-86c3-4cfb-91b3-1e917e21658a no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:11:09.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4408" for this suite. Apr 23 14:11:15.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:11:15.550: INFO: namespace configmap-4408 deletion completed in 6.125156714s • [SLOW TEST:10.259 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:11:15.550: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-debdaa24-ce27-4a2c-950c-661fef872add STEP: Creating a pod to test consume configMaps Apr 23 14:11:15.625: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b722188a-b721-437f-8792-0fb4d0549ea0" in namespace "projected-3028" to be "success or failure" Apr 23 14:11:15.642: INFO: Pod "pod-projected-configmaps-b722188a-b721-437f-8792-0fb4d0549ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 16.846791ms Apr 23 14:11:17.646: INFO: Pod "pod-projected-configmaps-b722188a-b721-437f-8792-0fb4d0549ea0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020165583s Apr 23 14:11:19.650: INFO: Pod "pod-projected-configmaps-b722188a-b721-437f-8792-0fb4d0549ea0": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.024960645s STEP: Saw pod success Apr 23 14:11:19.650: INFO: Pod "pod-projected-configmaps-b722188a-b721-437f-8792-0fb4d0549ea0" satisfied condition "success or failure" Apr 23 14:11:19.654: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-b722188a-b721-437f-8792-0fb4d0549ea0 container projected-configmap-volume-test: STEP: delete the pod Apr 23 14:11:19.694: INFO: Waiting for pod pod-projected-configmaps-b722188a-b721-437f-8792-0fb4d0549ea0 to disappear Apr 23 14:11:19.710: INFO: Pod pod-projected-configmaps-b722188a-b721-437f-8792-0fb4d0549ea0 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:11:19.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3028" for this suite. Apr 23 14:11:25.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:11:25.815: INFO: namespace projected-3028 deletion completed in 6.103105221s • [SLOW TEST:10.265 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:11:25.816: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 23 14:11:25.892: INFO: Waiting up to 5m0s for pod "pod-1b5c7b16-423c-45b7-a693-2577e04e1c55" in namespace "emptydir-3392" to be "success or failure" Apr 23 14:11:25.910: INFO: Pod "pod-1b5c7b16-423c-45b7-a693-2577e04e1c55": Phase="Pending", Reason="", readiness=false. Elapsed: 17.599365ms Apr 23 14:11:27.914: INFO: Pod "pod-1b5c7b16-423c-45b7-a693-2577e04e1c55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021075866s Apr 23 14:11:29.918: INFO: Pod "pod-1b5c7b16-423c-45b7-a693-2577e04e1c55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025294169s STEP: Saw pod success Apr 23 14:11:29.918: INFO: Pod "pod-1b5c7b16-423c-45b7-a693-2577e04e1c55" satisfied condition "success or failure" Apr 23 14:11:29.921: INFO: Trying to get logs from node iruya-worker2 pod pod-1b5c7b16-423c-45b7-a693-2577e04e1c55 container test-container: STEP: delete the pod Apr 23 14:11:29.958: INFO: Waiting for pod pod-1b5c7b16-423c-45b7-a693-2577e04e1c55 to disappear Apr 23 14:11:30.006: INFO: Pod pod-1b5c7b16-423c-45b7-a693-2577e04e1c55 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:11:30.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3392" for this suite. 
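The emptyDir test above writes a file with mode 0644 as a non-root user on the default medium and verifies the permissions from inside the container. The permission check itself can be sketched locally (this is an analogy on the host filesystem, not the e2e test's mechanism):

```python
import os
import stat
import tempfile

# Create a scratch file and give it the 0644 mode the test expects:
# owner read/write, group and others read-only.
with tempfile.NamedTemporaryFile(delete=False) as f:
    path = f.name
os.chmod(path, 0o644)
mode = stat.S_IMODE(os.stat(path).st_mode)
os.unlink(path)
```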
Apr 23 14:11:36.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:11:36.094: INFO: namespace emptydir-3392 deletion completed in 6.083794556s • [SLOW TEST:10.278 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:11:36.095: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 14:11:36.246: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"d5bee251-8856-4a4f-9f30-1afa5f625f8e", Controller:(*bool)(0xc0030a6272), BlockOwnerDeletion:(*bool)(0xc0030a6273)}} Apr 23 14:11:36.288: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"6a1025fd-919a-43c7-b119-9fe962836651", Controller:(*bool)(0xc0030a24b2), BlockOwnerDeletion:(*bool)(0xc0030a24b3)}} Apr 23 14:11:36.306: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", 
Kind:"Pod", Name:"pod2", UID:"10915325-2f69-43e3-8d33-01bedebed52a", Controller:(*bool)(0xc002552dda), BlockOwnerDeletion:(*bool)(0xc002552ddb)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:11:41.364: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4570" for this suite. Apr 23 14:11:47.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:11:47.454: INFO: namespace gc-4570 deletion completed in 6.086462097s • [SLOW TEST:11.359 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:11:47.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-8c165ba4-95a5-41dd-91d5-7c1dce0f5f42 STEP: Creating a pod to test consume secrets Apr 23 
14:11:47.541: INFO: Waiting up to 5m0s for pod "pod-secrets-2798769f-a851-4760-8da0-962b8d059a78" in namespace "secrets-9597" to be "success or failure" Apr 23 14:11:47.558: INFO: Pod "pod-secrets-2798769f-a851-4760-8da0-962b8d059a78": Phase="Pending", Reason="", readiness=false. Elapsed: 16.907178ms Apr 23 14:11:49.562: INFO: Pod "pod-secrets-2798769f-a851-4760-8da0-962b8d059a78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02091128s Apr 23 14:11:51.567: INFO: Pod "pod-secrets-2798769f-a851-4760-8da0-962b8d059a78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025400125s STEP: Saw pod success Apr 23 14:11:51.567: INFO: Pod "pod-secrets-2798769f-a851-4760-8da0-962b8d059a78" satisfied condition "success or failure" Apr 23 14:11:51.570: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-2798769f-a851-4760-8da0-962b8d059a78 container secret-volume-test: STEP: delete the pod Apr 23 14:11:51.586: INFO: Waiting for pod pod-secrets-2798769f-a851-4760-8da0-962b8d059a78 to disappear Apr 23 14:11:51.591: INFO: Pod pod-secrets-2798769f-a851-4760-8da0-962b8d059a78 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:11:51.591: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9597" for this suite. 
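A detail worth noting about `defaultMode` in secret (and configMap) volumes: the API serializes it as a decimal integer in JSON, so the familiar octal mode 0644 appears as 420 on the wire, exactly as in the `"defaultMode": 420` token-volume snippet later in this log:

```python
# JSON has no octal literals, so volume defaultMode travels as decimal:
# 0o644 == 420. This is why "defaultMode": 420 in a pod dump means rw-r--r--.
default_mode_json = 420
octal_repr = oct(default_mode_json)
```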
Apr 23 14:11:57.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:11:57.703: INFO: namespace secrets-9597 deletion completed in 6.109770668s • [SLOW TEST:10.249 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:11:57.704: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-2573, will wait for the garbage collector to delete the pods Apr 23 14:12:01.838: INFO: Deleting Job.batch foo took: 7.321162ms Apr 23 14:12:02.139: INFO: Terminating Job.batch foo pods took: 300.296732ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:12:42.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-2573" for this suite. 
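The Job deletion above ("will wait for the garbage collector to delete the pods") relies on a delete propagation policy in the request's DeleteOptions. A sketch of that request body (field names follow the v1 DeleteOptions schema; the specific policy shown is illustrative, as the log does not state which one the framework used):

```python
# Sketch of DeleteOptions for deleting Job.batch "foo" while letting the
# garbage collector clean up its pods. Valid propagationPolicy values are
# "Orphan", "Background", and "Foreground".
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Foreground",  # illustrative choice: block until dependents are gone
}
```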
Apr 23 14:12:48.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:12:48.343: INFO: namespace job-2573 deletion completed in 6.096564224s • [SLOW TEST:50.639 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:12:48.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted Apr 23 14:12:55.988: INFO: 0 pods remaining Apr 23 14:12:55.989: INFO: 0 pods has nil DeletionTimestamp Apr 23 14:12:55.989: INFO: STEP: Gathering metrics W0423 14:12:57.017463 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 23 14:12:57.017: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:12:57.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5827" for this suite. 
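Both garbage-collector tests in this log turn on `ownerReferences`: the dependency-circle test wires pod1 → pod3 → pod2 → pod1, and the keep-the-rc-around test depends on `blockOwnerDeletion`, which makes foreground deletion wait for the dependent. A sketch of one such reference, shaped like the `v1.OwnerReference` structs printed earlier:

```python
def owner_ref(name, uid, controller=True, block=True):
    """Build an ownerReference entry like the ones the dependency-circle
    test logs (a sketch of the v1 OwnerReference fields)."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "name": name,
        "uid": uid,
        "controller": controller,
        "blockOwnerDeletion": block,  # foreground deletion waits on this dependent
    }

# pod1's owner from the log: pod3, closing the circle.
ref = owner_ref("pod3", "d5bee251-8856-4a4f-9f30-1afa5f625f8e")
```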
Apr 23 14:13:03.303: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:13:03.385: INFO: namespace gc-5827 deletion completed in 6.282167148s • [SLOW TEST:15.042 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:13:03.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 23 14:13:03.428: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1318' Apr 23 14:13:03.532: INFO: stderr: "" Apr 23 14:13:03.532: 
INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 23 14:13:08.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1318 -o json' Apr 23 14:13:08.719: INFO: stderr: "" Apr 23 14:13:08.719: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-23T14:13:03Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-1318\",\n \"resourceVersion\": \"7012544\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1318/pods/e2e-test-nginx-pod\",\n \"uid\": \"5f8d10b6-6c4a-4f47-9cfb-2a51aceaddcf\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-zpt8z\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-zpt8z\",\n \"secret\": {\n \"defaultMode\": 
420,\n \"secretName\": \"default-token-zpt8z\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-23T14:13:03Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-23T14:13:06Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-23T14:13:06Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-23T14:13:03Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://a93c666d01f0b25562eab5b9f7a955ecf173ace5d164c792fd9042e3603be21f\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-23T14:13:05Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.171\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-23T14:13:03Z\"\n }\n}\n" STEP: replace the image in the pod Apr 23 14:13:08.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1318' Apr 23 14:13:09.044: INFO: stderr: "" Apr 23 14:13:09.044: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 23 14:13:09.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1318' Apr 23 14:13:12.691: 
INFO: stderr: "" Apr 23 14:13:12.691: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:13:12.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1318" for this suite. Apr 23 14:13:18.705: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:13:18.801: INFO: namespace kubectl-1318 deletion completed in 6.104580267s • [SLOW TEST:15.416 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:13:18.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-1150 STEP: waiting up to 3m0s for 
service multi-endpoint-test in namespace services-1150 to expose endpoints map[] Apr 23 14:13:18.916: INFO: Get endpoints failed (32.701361ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found Apr 23 14:13:19.920: INFO: successfully validated that service multi-endpoint-test in namespace services-1150 exposes endpoints map[] (1.036356593s elapsed) STEP: Creating pod pod1 in namespace services-1150 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1150 to expose endpoints map[pod1:[100]] Apr 23 14:13:23.016: INFO: successfully validated that service multi-endpoint-test in namespace services-1150 exposes endpoints map[pod1:[100]] (3.088848135s elapsed) STEP: Creating pod pod2 in namespace services-1150 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1150 to expose endpoints map[pod1:[100] pod2:[101]] Apr 23 14:13:26.111: INFO: successfully validated that service multi-endpoint-test in namespace services-1150 exposes endpoints map[pod1:[100] pod2:[101]] (3.090737457s elapsed) STEP: Deleting pod pod1 in namespace services-1150 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1150 to expose endpoints map[pod2:[101]] Apr 23 14:13:27.162: INFO: successfully validated that service multi-endpoint-test in namespace services-1150 exposes endpoints map[pod2:[101]] (1.046148254s elapsed) STEP: Deleting pod pod2 in namespace services-1150 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1150 to expose endpoints map[] Apr 23 14:13:28.203: INFO: successfully validated that service multi-endpoint-test in namespace services-1150 exposes endpoints map[] (1.035722317s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:13:28.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1150" for this suite. 
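The multiport-endpoints test above repeatedly compares the service's endpoints against an expected map keyed by pod name, e.g. `map[pod1:[100] pod2:[101]]`. The expected-map computation can be sketched as a function from ready pods to that map (a simplification of the framework's check, not its actual code):

```python
def endpoints_map(pods):
    """Expected endpoints for the service, keyed by pod name, mirroring
    the map[pod1:[100] pod2:[101]] lines in the log. `pods` maps pod
    name -> list of ready target ports; pods with no ports are excluded."""
    return {name: sorted(ports) for name, ports in pods.items() if ports}

# The sequence the test walks through as pods are created and deleted:
assert endpoints_map({}) == {}
assert endpoints_map({"pod1": [100]}) == {"pod1": [100]}
assert endpoints_map({"pod1": [100], "pod2": [101]}) == {"pod1": [100], "pod2": [101]}
assert endpoints_map({"pod2": [101]}) == {"pod2": [101]}
```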
Apr 23 14:13:34.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:13:34.523: INFO: namespace services-1150 deletion completed in 6.078756163s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:15.721 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:13:34.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 23 14:13:34.564: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 23 14:13:34.590: INFO: Waiting for terminating namespaces to be deleted...
Apr 23 14:13:34.592: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 23 14:13:34.597: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 23 14:13:34.597: INFO: Container kube-proxy ready: true, restart count 0
Apr 23 14:13:34.597: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 23 14:13:34.597: INFO: Container kindnet-cni ready: true, restart count 0
Apr 23 14:13:34.597: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 23 14:13:34.603: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 23 14:13:34.603: INFO: Container kindnet-cni ready: true, restart count 0
Apr 23 14:13:34.603: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 23 14:13:34.603: INFO: Container kube-proxy ready: true, restart count 0
Apr 23 14:13:34.603: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 23 14:13:34.603: INFO: Container coredns ready: true, restart count 0
Apr 23 14:13:34.603: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 23 14:13:34.603: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 23 14:13:34.658: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Apr 23 14:13:34.658: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Apr 23 14:13:34.658: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Apr 23 14:13:34.658: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Apr 23 14:13:34.658: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Apr 23 14:13:34.658: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-5eaf5250-2de7-48da-aee0-3930ffd22c02.1608785e35bd2dd9], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5025/filler-pod-5eaf5250-2de7-48da-aee0-3930ffd22c02 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5eaf5250-2de7-48da-aee0-3930ffd22c02.1608785e80c15f0b], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5eaf5250-2de7-48da-aee0-3930ffd22c02.1608785ec284ffcc], Reason = [Created], Message = [Created container filler-pod-5eaf5250-2de7-48da-aee0-3930ffd22c02]
STEP: Considering event: Type = [Normal], Name = [filler-pod-5eaf5250-2de7-48da-aee0-3930ffd22c02.1608785edc369708], Reason = [Started], Message = [Started container filler-pod-5eaf5250-2de7-48da-aee0-3930ffd22c02]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8d27423d-ab57-4941-b484-5689f4ffe9d8.1608785e38945a80], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5025/filler-pod-8d27423d-ab57-4941-b484-5689f4ffe9d8 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8d27423d-ab57-4941-b484-5689f4ffe9d8.1608785eb1b4b547], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8d27423d-ab57-4941-b484-5689f4ffe9d8.1608785ee2c5e4b4], Reason = [Created], Message = [Created container filler-pod-8d27423d-ab57-4941-b484-5689f4ffe9d8]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8d27423d-ab57-4941-b484-5689f4ffe9d8.1608785ef3084561], Reason = [Started], Message = [Started container filler-pod-8d27423d-ab57-4941-b484-5689f4ffe9d8]
STEP: Considering event: Type = [Warning], Name = [additional-pod.1608785f2934e9c8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:13:39.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5025" for this suite.
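The FailedScheduling outcome above is plain CPU arithmetic: the suite sums the existing per-node requests it logged, creates "filler" pods sized to the remaining allocatable CPU, and then one more request cannot fit on any worker. A sketch of that arithmetic; the 2-CPU allocatable figure is an assumption for illustration, not a value read from the log:

```python
def millicpu(quantity):
    """Parse a Kubernetes CPU quantity such as '100m' or '2' into millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def fits(allocatable_m, already_requested_m, new_request_m):
    """A pod fits if its request plus existing requests stays within allocatable."""
    return already_requested_m + new_request_m <= allocatable_m

# iruya-worker requests from the log: kindnet 100m, kube-proxy 0m.
allocatable = millicpu("2")                # assumed node size for illustration
used = millicpu("100m") + millicpu("0m")   # 100m already requested
filler = allocatable - used                # the filler pod takes the rest
assert fits(allocatable, used, filler)                          # filler schedules
assert not fits(allocatable, used + filler, millicpu("100m"))   # -> Insufficient cpu
```

With both workers filled this way, only the tainted control-plane node remains, which matches the event "0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu."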
Apr 23 14:13:45.825: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:13:45.896: INFO: namespace sched-pred-5025 deletion completed in 6.08179206s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:11.373 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:13:45.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-137d3031-ad04-406d-8c1c-5204e32c1072
STEP: Creating a pod to test consume secrets
Apr 23 14:13:46.034: INFO: Waiting up to 5m0s for pod "pod-secrets-e8282638-bc8c-461d-8d02-948315fa91a8" in namespace "secrets-1483" to be "success or failure"
Apr 23 14:13:46.038: INFO: Pod "pod-secrets-e8282638-bc8c-461d-8d02-948315fa91a8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.94033ms
Apr 23 14:13:48.041: INFO: Pod "pod-secrets-e8282638-bc8c-461d-8d02-948315fa91a8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006998816s
Apr 23 14:13:50.056: INFO: Pod "pod-secrets-e8282638-bc8c-461d-8d02-948315fa91a8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021605298s
STEP: Saw pod success
Apr 23 14:13:50.056: INFO: Pod "pod-secrets-e8282638-bc8c-461d-8d02-948315fa91a8" satisfied condition "success or failure"
Apr 23 14:13:50.059: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-e8282638-bc8c-461d-8d02-948315fa91a8 container secret-volume-test:
STEP: delete the pod
Apr 23 14:13:50.110: INFO: Waiting for pod pod-secrets-e8282638-bc8c-461d-8d02-948315fa91a8 to disappear
Apr 23 14:13:50.114: INFO: Pod pod-secrets-e8282638-bc8c-461d-8d02-948315fa91a8 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:13:50.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1483" for this suite.
Apr 23 14:13:56.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:13:56.218: INFO: namespace secrets-1483 deletion completed in 6.100113904s
STEP: Destroying namespace "secret-namespace-7003" for this suite.
Apr 23 14:14:02.345: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:14:02.416: INFO: namespace secret-namespace-7003 deletion completed in 6.198319477s
• [SLOW TEST:16.520 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:14:02.417: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 23 14:14:07.041: INFO: Successfully updated pod "annotationupdateeb367ce2-1a0c-439f-8bac-596d6739753d"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:14:11.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6513" for this suite.
Apr 23 14:14:33.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:14:33.151: INFO: namespace projected-6513 deletion completed in 22.084918756s
• [SLOW TEST:30.734 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:14:33.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Apr 23 14:14:33.219: INFO: Waiting up to 5m0s for pod "var-expansion-33a6b02e-34bd-4105-9dbc-6bccd75c968e" in namespace "var-expansion-3063" to be "success or failure"
Apr 23 14:14:33.225: INFO: Pod "var-expansion-33a6b02e-34bd-4105-9dbc-6bccd75c968e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.510966ms
Apr 23 14:14:35.295: INFO: Pod "var-expansion-33a6b02e-34bd-4105-9dbc-6bccd75c968e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075874469s
Apr 23 14:14:37.299: INFO: Pod "var-expansion-33a6b02e-34bd-4105-9dbc-6bccd75c968e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.079985965s
STEP: Saw pod success
Apr 23 14:14:37.299: INFO: Pod "var-expansion-33a6b02e-34bd-4105-9dbc-6bccd75c968e" satisfied condition "success or failure"
Apr 23 14:14:37.302: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-33a6b02e-34bd-4105-9dbc-6bccd75c968e container dapi-container:
STEP: delete the pod
Apr 23 14:14:37.320: INFO: Waiting for pod var-expansion-33a6b02e-34bd-4105-9dbc-6bccd75c968e to disappear
Apr 23 14:14:37.325: INFO: Pod var-expansion-33a6b02e-34bd-4105-9dbc-6bccd75c968e no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:14:37.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3063" for this suite.
Apr 23 14:14:43.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:14:43.419: INFO: namespace var-expansion-3063 deletion completed in 6.091431557s
• [SLOW TEST:10.268 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow composing env vars into new env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:14:43.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-91661585-dee9-4272-ba78-0c9069f4a4a3 in namespace container-probe-1197
Apr 23 14:14:49.485: INFO: Started pod liveness-91661585-dee9-4272-ba78-0c9069f4a4a3 in namespace container-probe-1197
STEP: checking the pod's current state and verifying that restartCount is present
Apr 23 14:14:49.488: INFO: Initial restart count of pod liveness-91661585-dee9-4272-ba78-0c9069f4a4a3 is 0
Apr 23 14:15:11.537: INFO: Restart count of pod container-probe-1197/liveness-91661585-dee9-4272-ba78-0c9069f4a4a3 is now 1 (22.049312979s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:15:11.556: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1197" for this suite.
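The liveness test above relies on a pod that serves /healthz and starts failing it after a while, which is what drives restartCount from 0 to 1. A self-contained sketch of such a probe target using only the Python standard library; the 10-second healthy window and handler name are illustrative, not the e2e liveness image's actual behavior:

```python
import time
from http.server import BaseHTTPRequestHandler

START = time.monotonic()
HEALTHY_SECONDS = 10  # assumed window; the real liveness image uses its own timing

class Healthz(BaseHTTPRequestHandler):
    """Serve 200 on /healthz while 'healthy', then 500, so a kubelet
    liveness probe pointed at this port would fail and restart the container."""
    def do_GET(self):
        ok = (time.monotonic() - START) < HEALTHY_SECONDS
        self.send_response(200 if ok else 500)
        self.end_headers()
        self.wfile.write(b"ok" if ok else b"unhealthy")

    def log_message(self, *args):
        pass  # keep probe traffic out of the log
```

Once the handler begins returning 500, the kubelet's probe failure threshold is crossed and the container is restarted, matching the "is now 1 (22.049312979s elapsed)" line above.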
Apr 23 14:15:17.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:15:17.662: INFO: namespace container-probe-1197 deletion completed in 6.09531436s
• [SLOW TEST:34.243 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:15:17.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7283
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-7283
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-7283
Apr 23 14:15:17.728: INFO: Found 0 stateful pods, waiting
for 1 Apr 23 14:15:27.733: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 23 14:15:27.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 23 14:15:27.964: INFO: stderr: "I0423 14:15:27.864461 2624 log.go:172] (0xc00013adc0) (0xc0006b8820) Create stream\nI0423 14:15:27.864535 2624 log.go:172] (0xc00013adc0) (0xc0006b8820) Stream added, broadcasting: 1\nI0423 14:15:27.867489 2624 log.go:172] (0xc00013adc0) Reply frame received for 1\nI0423 14:15:27.867527 2624 log.go:172] (0xc00013adc0) (0xc000958000) Create stream\nI0423 14:15:27.867545 2624 log.go:172] (0xc00013adc0) (0xc000958000) Stream added, broadcasting: 3\nI0423 14:15:27.868544 2624 log.go:172] (0xc00013adc0) Reply frame received for 3\nI0423 14:15:27.868593 2624 log.go:172] (0xc00013adc0) (0xc00099e000) Create stream\nI0423 14:15:27.868615 2624 log.go:172] (0xc00013adc0) (0xc00099e000) Stream added, broadcasting: 5\nI0423 14:15:27.869645 2624 log.go:172] (0xc00013adc0) Reply frame received for 5\nI0423 14:15:27.926812 2624 log.go:172] (0xc00013adc0) Data frame received for 5\nI0423 14:15:27.926839 2624 log.go:172] (0xc00099e000) (5) Data frame handling\nI0423 14:15:27.926890 2624 log.go:172] (0xc00099e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 14:15:27.955245 2624 log.go:172] (0xc00013adc0) Data frame received for 3\nI0423 14:15:27.955294 2624 log.go:172] (0xc000958000) (3) Data frame handling\nI0423 14:15:27.955312 2624 log.go:172] (0xc000958000) (3) Data frame sent\nI0423 14:15:27.955323 2624 log.go:172] (0xc00013adc0) Data frame received for 3\nI0423 14:15:27.955343 2624 log.go:172] (0xc000958000) (3) Data frame handling\nI0423 14:15:27.955772 2624 log.go:172] (0xc00013adc0) Data frame received for 
5\nI0423 14:15:27.955796 2624 log.go:172] (0xc00099e000) (5) Data frame handling\nI0423 14:15:27.957620 2624 log.go:172] (0xc00013adc0) Data frame received for 1\nI0423 14:15:27.957648 2624 log.go:172] (0xc0006b8820) (1) Data frame handling\nI0423 14:15:27.957661 2624 log.go:172] (0xc0006b8820) (1) Data frame sent\nI0423 14:15:27.957681 2624 log.go:172] (0xc00013adc0) (0xc0006b8820) Stream removed, broadcasting: 1\nI0423 14:15:27.957716 2624 log.go:172] (0xc00013adc0) Go away received\nI0423 14:15:27.958192 2624 log.go:172] (0xc00013adc0) (0xc0006b8820) Stream removed, broadcasting: 1\nI0423 14:15:27.958219 2624 log.go:172] (0xc00013adc0) (0xc000958000) Stream removed, broadcasting: 3\nI0423 14:15:27.958236 2624 log.go:172] (0xc00013adc0) (0xc00099e000) Stream removed, broadcasting: 5\n" Apr 23 14:15:27.964: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 23 14:15:27.964: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 23 14:15:27.967: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 23 14:15:37.972: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 23 14:15:37.972: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 14:15:37.986: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:15:37.986: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:28 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:15:37.986: INFO: Apr 23 14:15:37.986: INFO: StatefulSet ss has not reached scale 3, 
at 1 Apr 23 14:15:38.991: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.996339934s Apr 23 14:15:39.995: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.991751662s Apr 23 14:15:41.361: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.987235117s Apr 23 14:15:42.366: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.620974573s Apr 23 14:15:43.370: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.616358311s Apr 23 14:15:44.374: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.612077234s Apr 23 14:15:45.379: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.607975903s Apr 23 14:15:46.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.603032178s Apr 23 14:15:47.388: INFO: Verifying statefulset ss doesn't scale past 3 for another 598.194263ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-7283 Apr 23 14:15:48.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:15:48.646: INFO: stderr: "I0423 14:15:48.534898 2643 log.go:172] (0xc000116000) (0xc0001fc280) Create stream\nI0423 14:15:48.534962 2643 log.go:172] (0xc000116000) (0xc0001fc280) Stream added, broadcasting: 1\nI0423 14:15:48.537236 2643 log.go:172] (0xc000116000) Reply frame received for 1\nI0423 14:15:48.537300 2643 log.go:172] (0xc000116000) (0xc000200000) Create stream\nI0423 14:15:48.537318 2643 log.go:172] (0xc000116000) (0xc000200000) Stream added, broadcasting: 3\nI0423 14:15:48.538578 2643 log.go:172] (0xc000116000) Reply frame received for 3\nI0423 14:15:48.538617 2643 log.go:172] (0xc000116000) (0xc00022a000) Create stream\nI0423 14:15:48.538629 2643 log.go:172] (0xc000116000) (0xc00022a000) Stream added, broadcasting: 5\nI0423 14:15:48.539668 2643 
log.go:172] (0xc000116000) Reply frame received for 5\nI0423 14:15:48.638225 2643 log.go:172] (0xc000116000) Data frame received for 3\nI0423 14:15:48.638246 2643 log.go:172] (0xc000200000) (3) Data frame handling\nI0423 14:15:48.638254 2643 log.go:172] (0xc000200000) (3) Data frame sent\nI0423 14:15:48.638504 2643 log.go:172] (0xc000116000) Data frame received for 3\nI0423 14:15:48.638546 2643 log.go:172] (0xc000200000) (3) Data frame handling\nI0423 14:15:48.638580 2643 log.go:172] (0xc000116000) Data frame received for 5\nI0423 14:15:48.638594 2643 log.go:172] (0xc00022a000) (5) Data frame handling\nI0423 14:15:48.638607 2643 log.go:172] (0xc00022a000) (5) Data frame sent\nI0423 14:15:48.638622 2643 log.go:172] (0xc000116000) Data frame received for 5\nI0423 14:15:48.638638 2643 log.go:172] (0xc00022a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0423 14:15:48.640330 2643 log.go:172] (0xc000116000) Data frame received for 1\nI0423 14:15:48.640358 2643 log.go:172] (0xc0001fc280) (1) Data frame handling\nI0423 14:15:48.640369 2643 log.go:172] (0xc0001fc280) (1) Data frame sent\nI0423 14:15:48.640380 2643 log.go:172] (0xc000116000) (0xc0001fc280) Stream removed, broadcasting: 1\nI0423 14:15:48.640395 2643 log.go:172] (0xc000116000) Go away received\nI0423 14:15:48.640935 2643 log.go:172] (0xc000116000) (0xc0001fc280) Stream removed, broadcasting: 1\nI0423 14:15:48.640957 2643 log.go:172] (0xc000116000) (0xc000200000) Stream removed, broadcasting: 3\nI0423 14:15:48.640969 2643 log.go:172] (0xc000116000) (0xc00022a000) Stream removed, broadcasting: 5\n" Apr 23 14:15:48.646: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 23 14:15:48.646: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 23 14:15:48.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-1 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:15:48.865: INFO: stderr: "I0423 14:15:48.781877 2662 log.go:172] (0xc0009c4420) (0xc0009e8640) Create stream\nI0423 14:15:48.781940 2662 log.go:172] (0xc0009c4420) (0xc0009e8640) Stream added, broadcasting: 1\nI0423 14:15:48.788441 2662 log.go:172] (0xc0009c4420) Reply frame received for 1\nI0423 14:15:48.788515 2662 log.go:172] (0xc0009c4420) (0xc0009e86e0) Create stream\nI0423 14:15:48.788542 2662 log.go:172] (0xc0009c4420) (0xc0009e86e0) Stream added, broadcasting: 3\nI0423 14:15:48.791002 2662 log.go:172] (0xc0009c4420) Reply frame received for 3\nI0423 14:15:48.791316 2662 log.go:172] (0xc0009c4420) (0xc00092c000) Create stream\nI0423 14:15:48.791343 2662 log.go:172] (0xc0009c4420) (0xc00092c000) Stream added, broadcasting: 5\nI0423 14:15:48.793426 2662 log.go:172] (0xc0009c4420) Reply frame received for 5\nI0423 14:15:48.857905 2662 log.go:172] (0xc0009c4420) Data frame received for 3\nI0423 14:15:48.857940 2662 log.go:172] (0xc0009e86e0) (3) Data frame handling\nI0423 14:15:48.857954 2662 log.go:172] (0xc0009e86e0) (3) Data frame sent\nI0423 14:15:48.857965 2662 log.go:172] (0xc0009c4420) Data frame received for 3\nI0423 14:15:48.857974 2662 log.go:172] (0xc0009e86e0) (3) Data frame handling\nI0423 14:15:48.858033 2662 log.go:172] (0xc0009c4420) Data frame received for 5\nI0423 14:15:48.858064 2662 log.go:172] (0xc00092c000) (5) Data frame handling\nI0423 14:15:48.858087 2662 log.go:172] (0xc00092c000) (5) Data frame sent\nI0423 14:15:48.858117 2662 log.go:172] (0xc0009c4420) Data frame received for 5\nI0423 14:15:48.858143 2662 log.go:172] (0xc00092c000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0423 14:15:48.859327 2662 log.go:172] (0xc0009c4420) Data frame received for 1\nI0423 14:15:48.859349 2662 log.go:172] (0xc0009e8640) (1) Data frame handling\nI0423 14:15:48.859366 2662 
log.go:172] (0xc0009e8640) (1) Data frame sent\nI0423 14:15:48.859407 2662 log.go:172] (0xc0009c4420) (0xc0009e8640) Stream removed, broadcasting: 1\nI0423 14:15:48.859532 2662 log.go:172] (0xc0009c4420) Go away received\nI0423 14:15:48.859939 2662 log.go:172] (0xc0009c4420) (0xc0009e8640) Stream removed, broadcasting: 1\nI0423 14:15:48.859966 2662 log.go:172] (0xc0009c4420) (0xc0009e86e0) Stream removed, broadcasting: 3\nI0423 14:15:48.859978 2662 log.go:172] (0xc0009c4420) (0xc00092c000) Stream removed, broadcasting: 5\n" Apr 23 14:15:48.865: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 23 14:15:48.865: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 23 14:15:48.865: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:15:49.071: INFO: stderr: "I0423 14:15:48.994458 2683 log.go:172] (0xc000130fd0) (0xc0006cc960) Create stream\nI0423 14:15:48.994518 2683 log.go:172] (0xc000130fd0) (0xc0006cc960) Stream added, broadcasting: 1\nI0423 14:15:48.998487 2683 log.go:172] (0xc000130fd0) Reply frame received for 1\nI0423 14:15:48.998529 2683 log.go:172] (0xc000130fd0) (0xc0006cc000) Create stream\nI0423 14:15:48.998551 2683 log.go:172] (0xc000130fd0) (0xc0006cc000) Stream added, broadcasting: 3\nI0423 14:15:48.999473 2683 log.go:172] (0xc000130fd0) Reply frame received for 3\nI0423 14:15:48.999529 2683 log.go:172] (0xc000130fd0) (0xc0006241e0) Create stream\nI0423 14:15:48.999546 2683 log.go:172] (0xc000130fd0) (0xc0006241e0) Stream added, broadcasting: 5\nI0423 14:15:49.000438 2683 log.go:172] (0xc000130fd0) Reply frame received for 5\nI0423 14:15:49.064192 2683 log.go:172] (0xc000130fd0) Data frame received for 3\nI0423 14:15:49.064258 2683 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0423 14:15:49.064290 
2683 log.go:172] (0xc0006cc000) (3) Data frame sent\nI0423 14:15:49.064328 2683 log.go:172] (0xc000130fd0) Data frame received for 3\nI0423 14:15:49.064345 2683 log.go:172] (0xc0006cc000) (3) Data frame handling\nI0423 14:15:49.064374 2683 log.go:172] (0xc000130fd0) Data frame received for 5\nI0423 14:15:49.064399 2683 log.go:172] (0xc0006241e0) (5) Data frame handling\nI0423 14:15:49.064423 2683 log.go:172] (0xc0006241e0) (5) Data frame sent\nI0423 14:15:49.064436 2683 log.go:172] (0xc000130fd0) Data frame received for 5\nI0423 14:15:49.064463 2683 log.go:172] (0xc0006241e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0423 14:15:49.066023 2683 log.go:172] (0xc000130fd0) Data frame received for 1\nI0423 14:15:49.066038 2683 log.go:172] (0xc0006cc960) (1) Data frame handling\nI0423 14:15:49.066058 2683 log.go:172] (0xc0006cc960) (1) Data frame sent\nI0423 14:15:49.066071 2683 log.go:172] (0xc000130fd0) (0xc0006cc960) Stream removed, broadcasting: 1\nI0423 14:15:49.066331 2683 log.go:172] (0xc000130fd0) Go away received\nI0423 14:15:49.066383 2683 log.go:172] (0xc000130fd0) (0xc0006cc960) Stream removed, broadcasting: 1\nI0423 14:15:49.066413 2683 log.go:172] (0xc000130fd0) (0xc0006cc000) Stream removed, broadcasting: 3\nI0423 14:15:49.066428 2683 log.go:172] (0xc000130fd0) (0xc0006241e0) Stream removed, broadcasting: 5\n" Apr 23 14:15:49.071: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 23 14:15:49.071: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 23 14:15:49.075: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 23 14:15:59.081: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 23 14:15:59.081: INFO: Waiting for pod ss-1 to enter Running - Ready=true, 
currently Running - Ready=true Apr 23 14:15:59.081: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 23 14:15:59.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 23 14:15:59.325: INFO: stderr: "I0423 14:15:59.207245 2704 log.go:172] (0xc000138dc0) (0xc000556820) Create stream\nI0423 14:15:59.207363 2704 log.go:172] (0xc000138dc0) (0xc000556820) Stream added, broadcasting: 1\nI0423 14:15:59.210460 2704 log.go:172] (0xc000138dc0) Reply frame received for 1\nI0423 14:15:59.210555 2704 log.go:172] (0xc000138dc0) (0xc000870000) Create stream\nI0423 14:15:59.210586 2704 log.go:172] (0xc000138dc0) (0xc000870000) Stream added, broadcasting: 3\nI0423 14:15:59.212241 2704 log.go:172] (0xc000138dc0) Reply frame received for 3\nI0423 14:15:59.212331 2704 log.go:172] (0xc000138dc0) (0xc000956000) Create stream\nI0423 14:15:59.212365 2704 log.go:172] (0xc000138dc0) (0xc000956000) Stream added, broadcasting: 5\nI0423 14:15:59.213867 2704 log.go:172] (0xc000138dc0) Reply frame received for 5\nI0423 14:15:59.317625 2704 log.go:172] (0xc000138dc0) Data frame received for 3\nI0423 14:15:59.317650 2704 log.go:172] (0xc000870000) (3) Data frame handling\nI0423 14:15:59.317664 2704 log.go:172] (0xc000870000) (3) Data frame sent\nI0423 14:15:59.317855 2704 log.go:172] (0xc000138dc0) Data frame received for 3\nI0423 14:15:59.317939 2704 log.go:172] (0xc000870000) (3) Data frame handling\nI0423 14:15:59.317974 2704 log.go:172] (0xc000138dc0) Data frame received for 5\nI0423 14:15:59.318015 2704 log.go:172] (0xc000956000) (5) Data frame handling\nI0423 14:15:59.318030 2704 log.go:172] (0xc000956000) (5) Data frame sent\nI0423 14:15:59.318050 2704 log.go:172] (0xc000138dc0) Data frame received for 5\nI0423 14:15:59.318072 2704 log.go:172] (0xc000956000) 
(5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 14:15:59.319601 2704 log.go:172] (0xc000138dc0) Data frame received for 1\nI0423 14:15:59.319626 2704 log.go:172] (0xc000556820) (1) Data frame handling\nI0423 14:15:59.319644 2704 log.go:172] (0xc000556820) (1) Data frame sent\nI0423 14:15:59.319665 2704 log.go:172] (0xc000138dc0) (0xc000556820) Stream removed, broadcasting: 1\nI0423 14:15:59.319682 2704 log.go:172] (0xc000138dc0) Go away received\nI0423 14:15:59.320083 2704 log.go:172] (0xc000138dc0) (0xc000556820) Stream removed, broadcasting: 1\nI0423 14:15:59.320103 2704 log.go:172] (0xc000138dc0) (0xc000870000) Stream removed, broadcasting: 3\nI0423 14:15:59.320113 2704 log.go:172] (0xc000138dc0) (0xc000956000) Stream removed, broadcasting: 5\n" Apr 23 14:15:59.326: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 23 14:15:59.326: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 23 14:15:59.326: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 23 14:15:59.577: INFO: stderr: "I0423 14:15:59.458747 2725 log.go:172] (0xc000a1a420) (0xc0002c06e0) Create stream\nI0423 14:15:59.458822 2725 log.go:172] (0xc000a1a420) (0xc0002c06e0) Stream added, broadcasting: 1\nI0423 14:15:59.462229 2725 log.go:172] (0xc000a1a420) Reply frame received for 1\nI0423 14:15:59.462307 2725 log.go:172] (0xc000a1a420) (0xc000834000) Create stream\nI0423 14:15:59.462343 2725 log.go:172] (0xc000a1a420) (0xc000834000) Stream added, broadcasting: 3\nI0423 14:15:59.463552 2725 log.go:172] (0xc000a1a420) Reply frame received for 3\nI0423 14:15:59.463587 2725 log.go:172] (0xc000a1a420) (0xc0008340a0) Create stream\nI0423 14:15:59.463607 2725 log.go:172] (0xc000a1a420) (0xc0008340a0) Stream added, broadcasting: 5\nI0423 
14:15:59.464960 2725 log.go:172] (0xc000a1a420) Reply frame received for 5\nI0423 14:15:59.526998 2725 log.go:172] (0xc000a1a420) Data frame received for 5\nI0423 14:15:59.527027 2725 log.go:172] (0xc0008340a0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 14:15:59.527039 2725 log.go:172] (0xc0008340a0) (5) Data frame sent\nI0423 14:15:59.571144 2725 log.go:172] (0xc000a1a420) Data frame received for 3\nI0423 14:15:59.571184 2725 log.go:172] (0xc000834000) (3) Data frame handling\nI0423 14:15:59.571193 2725 log.go:172] (0xc000834000) (3) Data frame sent\nI0423 14:15:59.571201 2725 log.go:172] (0xc000a1a420) Data frame received for 3\nI0423 14:15:59.571205 2725 log.go:172] (0xc000834000) (3) Data frame handling\nI0423 14:15:59.571247 2725 log.go:172] (0xc000a1a420) Data frame received for 5\nI0423 14:15:59.571281 2725 log.go:172] (0xc0008340a0) (5) Data frame handling\nI0423 14:15:59.572715 2725 log.go:172] (0xc000a1a420) Data frame received for 1\nI0423 14:15:59.572737 2725 log.go:172] (0xc0002c06e0) (1) Data frame handling\nI0423 14:15:59.572754 2725 log.go:172] (0xc0002c06e0) (1) Data frame sent\nI0423 14:15:59.572776 2725 log.go:172] (0xc000a1a420) (0xc0002c06e0) Stream removed, broadcasting: 1\nI0423 14:15:59.572798 2725 log.go:172] (0xc000a1a420) Go away received\nI0423 14:15:59.573094 2725 log.go:172] (0xc000a1a420) (0xc0002c06e0) Stream removed, broadcasting: 1\nI0423 14:15:59.573220 2725 log.go:172] (0xc000a1a420) (0xc000834000) Stream removed, broadcasting: 3\nI0423 14:15:59.573235 2725 log.go:172] (0xc000a1a420) (0xc0008340a0) Stream removed, broadcasting: 5\n" Apr 23 14:15:59.577: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 23 14:15:59.577: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 23 14:15:59.577: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec 
--namespace=statefulset-7283 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 23 14:15:59.801: INFO: stderr: "I0423 14:15:59.703514 2745 log.go:172] (0xc000a72420) (0xc00087c8c0) Create stream\nI0423 14:15:59.703571 2745 log.go:172] (0xc000a72420) (0xc00087c8c0) Stream added, broadcasting: 1\nI0423 14:15:59.706469 2745 log.go:172] (0xc000a72420) Reply frame received for 1\nI0423 14:15:59.706519 2745 log.go:172] (0xc000a72420) (0xc0005a1a40) Create stream\nI0423 14:15:59.706542 2745 log.go:172] (0xc000a72420) (0xc0005a1a40) Stream added, broadcasting: 3\nI0423 14:15:59.707617 2745 log.go:172] (0xc000a72420) Reply frame received for 3\nI0423 14:15:59.707659 2745 log.go:172] (0xc000a72420) (0xc00087c960) Create stream\nI0423 14:15:59.707672 2745 log.go:172] (0xc000a72420) (0xc00087c960) Stream added, broadcasting: 5\nI0423 14:15:59.708748 2745 log.go:172] (0xc000a72420) Reply frame received for 5\nI0423 14:15:59.771341 2745 log.go:172] (0xc000a72420) Data frame received for 5\nI0423 14:15:59.771376 2745 log.go:172] (0xc00087c960) (5) Data frame handling\nI0423 14:15:59.771405 2745 log.go:172] (0xc00087c960) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0423 14:15:59.793296 2745 log.go:172] (0xc000a72420) Data frame received for 5\nI0423 14:15:59.793332 2745 log.go:172] (0xc00087c960) (5) Data frame handling\nI0423 14:15:59.793366 2745 log.go:172] (0xc000a72420) Data frame received for 3\nI0423 14:15:59.793390 2745 log.go:172] (0xc0005a1a40) (3) Data frame handling\nI0423 14:15:59.793412 2745 log.go:172] (0xc0005a1a40) (3) Data frame sent\nI0423 14:15:59.793428 2745 log.go:172] (0xc000a72420) Data frame received for 3\nI0423 14:15:59.793453 2745 log.go:172] (0xc0005a1a40) (3) Data frame handling\nI0423 14:15:59.795183 2745 log.go:172] (0xc000a72420) Data frame received for 1\nI0423 14:15:59.795200 2745 log.go:172] (0xc00087c8c0) (1) Data frame handling\nI0423 14:15:59.795211 2745 log.go:172] (0xc00087c8c0) (1) Data 
frame sent\nI0423 14:15:59.795221 2745 log.go:172] (0xc000a72420) (0xc00087c8c0) Stream removed, broadcasting: 1\nI0423 14:15:59.795287 2745 log.go:172] (0xc000a72420) Go away received\nI0423 14:15:59.795534 2745 log.go:172] (0xc000a72420) (0xc00087c8c0) Stream removed, broadcasting: 1\nI0423 14:15:59.795555 2745 log.go:172] (0xc000a72420) (0xc0005a1a40) Stream removed, broadcasting: 3\nI0423 14:15:59.795564 2745 log.go:172] (0xc000a72420) (0xc00087c960) Stream removed, broadcasting: 5\n" Apr 23 14:15:59.801: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 23 14:15:59.801: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 23 14:15:59.801: INFO: Waiting for statefulset status.replicas updated to 0 Apr 23 14:15:59.804: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 23 14:16:09.814: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 23 14:16:09.814: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 23 14:16:09.814: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 23 14:16:09.844: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:09.844: INFO: ss-0 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:09.844: INFO: ss-1 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 
14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:37 +0000 UTC }] Apr 23 14:16:09.844: INFO: ss-2 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:09.844: INFO: Apr 23 14:16:09.844: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 23 14:16:10.849: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:10.849: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:10.849: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:37 +0000 UTC }] Apr 23 14:16:10.849: INFO: ss-2 iruya-worker2 
Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:10.850: INFO: Apr 23 14:16:10.850: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 23 14:16:11.855: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:11.855: INFO: ss-0 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:11.855: INFO: ss-1 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:37 +0000 UTC }] Apr 23 14:16:11.855: INFO: ss-2 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:11.855: INFO: Apr 23 14:16:11.855: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 23 14:16:12.861: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:12.861: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:12.861: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:12.861: INFO: Apr 23 14:16:12.861: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 23 14:16:13.866: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:13.866: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:13.866: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:13.866: INFO: Apr 23 14:16:13.866: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 23 14:16:14.870: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:14.870: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:14.870: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:14.870: INFO: Apr 23 14:16:14.870: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 23 14:16:15.875: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:15.876: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:15.876: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:15.876: INFO: Apr 23 14:16:15.876: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 23 14:16:16.881: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:16.881: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:16.881: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:16.881: INFO: Apr 23 14:16:16.881: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 23 14:16:17.886: INFO: 
POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:17.886: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:17.886: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:17.886: INFO: Apr 23 14:16:17.886: INFO: StatefulSet ss has not reached scale 0, at 2 Apr 23 14:16:18.891: INFO: POD NODE PHASE GRACE CONDITIONS Apr 23 14:16:18.891: INFO: ss-0 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:17 +0000 UTC }] Apr 23 14:16:18.891: INFO: ss-2 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-04-23 14:16:00 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-23 14:15:38 +0000 UTC }] Apr 23 14:16:18.891: INFO: Apr 23 14:16:18.891: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-7283 Apr 23 14:16:19.896: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:16:20.027: INFO: rc: 1 Apr 23 14:16:20.028: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc0021b90e0 exit status 1 true [0xc000361bb8 0xc000361cb8 0xc000361e78] [0xc000361bb8 0xc000361cb8 0xc000361e78] [0xc000361c50 0xc000361d68] [0xba70e0 0xba70e0] 0xc0024ca660 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Apr 23 14:16:30.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:16:30.122: INFO: rc: 1 Apr 23 14:16:30.122: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001cd79e0 exit status 1 true [0xc000978440 0xc000978510 0xc000978548] [0xc000978440 0xc000978510 0xc000978548] [0xc000978500 0xc000978538] [0xba70e0 0xba70e0] 0xc002fce780 }: Command stdout: 
stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 14:16:40.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:16:40.232: INFO: rc: 1 Apr 23 14:16:40.232: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b91a0 exit status 1 true [0xc000361ea0 0xc002a96000 0xc002a96018] [0xc000361ea0 0xc002a96000 0xc002a96018] [0xc000361f68 0xc002a96010] [0xba70e0 0xba70e0] 0xc0024ca960 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 14:16:50.232: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:16:50.333: INFO: rc: 1 Apr 23 14:16:50.333: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b9290 exit status 1 true [0xc002a96020 0xc002a96038 0xc002a96050] [0xc002a96020 0xc002a96038 0xc002a96050] [0xc002a96030 0xc002a96048] [0xba70e0 0xba70e0] 0xc0024caf00 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 14:17:00.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:17:00.426: INFO: rc: 1 Apr 23 14:17:00.426: INFO: Waiting 
10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc0021b9350 exit status 1 true [0xc002a96058 0xc002a96070 0xc002a96088] [0xc002a96058 0xc002a96070 0xc002a96088] [0xc002a96068 0xc002a96080] [0xba70e0 0xba70e0] 0xc0024cb5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 14:17:10.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:17:10.531: INFO: rc: 1 Apr 23 14:17:10.531: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc002a12120 exit status 1 true [0xc002c6c000 0xc002c6c038 0xc002c6c078] [0xc002c6c000 0xc002c6c038 0xc002c6c078] [0xc002c6c020 0xc002c6c060] [0xba70e0 0xba70e0] 0xc001cc6780 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 14:17:20.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:17:20.634: INFO: rc: 1 Apr 23 14:17:20.634: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8bd40 exit status 1 true [0xc000aaa078 0xc000aaa090 
0xc000aaa0a8] [0xc000aaa078 0xc000aaa090 0xc000aaa0a8] [0xc000aaa088 0xc000aaa0a0] [0xba70e0 0xba70e0] 0xc00263f200 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 14:17:30.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:17:30.745: INFO: rc: 1 Apr 23 14:17:30.745: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8be00 exit status 1 true [0xc000aaa0b0 0xc000aaa0c8 0xc000aaa0e0] [0xc000aaa0b0 0xc000aaa0c8 0xc000aaa0e0] [0xc000aaa0c0 0xc000aaa0d8] [0xba70e0 0xba70e0] 0xc00263f5c0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 14:17:40.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 23 14:17:40.843: INFO: rc: 1 Apr 23 14:17:40.843: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8bef0 exit status 1 true [0xc000aaa0e8 0xc000aaa100 0xc000aaa118] [0xc000aaa0e8 0xc000aaa100 0xc000aaa118] [0xc000aaa0f8 0xc000aaa110] [0xba70e0 0xba70e0] 0xc00263fa40 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 23 14:17:50.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 23 14:17:50.941: INFO: rc: 1
Apr 23 14:17:50.941: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001f8bfb0 exit status 1 true [0xc000aaa120 0xc000aaa138 0xc000aaa150] [0xc000aaa120 0xc000aaa138 0xc000aaa150] [0xc000aaa130 0xc000aaa148] [0xba70e0 0xba70e0] 0xc00263fd40 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1

[identical RunHostCmd attempts, retried every 10s from 14:18:00.942 through 14:21:15.160, omitted; every attempt returned rc: 1 with 'Error from server (NotFound): pods "ss-0" not found']

Apr 23 14:21:25.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7283 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 23 14:21:25.269: INFO: rc: 1
Apr 23 14:21:25.269: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0:
Apr 23 14:21:25.269: INFO: Scaling statefulset ss to 0
Apr 23 14:21:25.277: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 23 14:21:25.279: INFO: Deleting all statefulset in ns statefulset-7283
Apr 23 14:21:25.281: INFO: Scaling statefulset ss to 0
Apr 23 14:21:25.289: INFO:
Waiting for statefulset status.replicas updated to 0
Apr 23 14:21:25.291: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:21:25.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7283" for this suite.
Apr 23 14:21:31.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:21:31.395: INFO: namespace statefulset-7283 deletion completed in 6.088154704s
• [SLOW TEST:373.733 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Burst scaling should run to completion even with unhealthy pods [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:21:31.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
Apr 23 14:21:35.971: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9473 pod-service-account-37a11307-90b6-4cca-a2bb-d0dd60d58362 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Apr 23 14:21:36.214: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9473 pod-service-account-37a11307-90b6-4cca-a2bb-d0dd60d58362 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Apr 23 14:21:36.418: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9473 pod-service-account-37a11307-90b6-4cca-a2bb-d0dd60d58362 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:21:36.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9473" for this suite.
Apr 23 14:21:42.680: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:21:42.754: INFO: namespace svcaccounts-9473 deletion completed in 6.108794258s
• [SLOW TEST:11.359 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:21:42.755: INFO: >>> kubeConfig:
/root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:21:42.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8618" for this suite.
Apr 23 14:22:04.883: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:22:04.955: INFO: namespace pods-8618 deletion completed in 22.129744856s
• [SLOW TEST:22.200 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:22:04.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 23 14:22:09.561: INFO: Successfully updated pod "annotationupdate8ecc821f-26d8-4f60-96e5-7f66bea5535f"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:22:11.612: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2526" for this suite.
Apr 23 14:22:33.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:22:33.711: INFO: namespace downward-api-2526 deletion completed in 22.094721539s
• [SLOW TEST:28.756 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:22:33.711: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-874ba1a1-ada5-42db-a3c1-107592536b03
STEP: Creating configMap with name cm-test-opt-upd-0cd9af3a-356d-44ea-bd3e-9094b3eceeeb
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-874ba1a1-ada5-42db-a3c1-107592536b03
STEP: Updating configmap cm-test-opt-upd-0cd9af3a-356d-44ea-bd3e-9094b3eceeeb
STEP: Creating configMap with name cm-test-opt-create-992e416d-9e6b-4bcd-a168-eecaefc24d8e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:23:50.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4012" for this suite.
Apr 23 14:24:12.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:24:12.320: INFO: namespace projected-4012 deletion completed in 22.07575353s
• [SLOW TEST:98.609 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:24:12.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:25:12.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-400" for this suite.
Apr 23 14:25:34.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:25:34.500: INFO: namespace container-probe-400 deletion completed in 22.088859085s
• [SLOW TEST:82.179 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:25:34.500: INFO: >>> kubeConfig: /root/.kube/config
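The readiness-probe spec above passes when the pod stays Running but never becomes Ready, with its restart count pinned at 0. A minimal local sketch of that semantic, with a stand-in `probe` function instead of a real kubelet exec probe (all names here are illustrative, not from the e2e framework):

```shell
# Sketch of the readiness-probe semantics exercised above (assumption: a
# stubbed probe command instead of a kubelet-driven exec probe).
# A failing readiness probe marks the container NotReady; unlike a failing
# liveness probe, it never restarts the container.
probe() { false; }            # always-failing readiness check
ready=true
restarts=0
for i in 1 2 3; do            # the kubelet re-runs the probe periodically
  if probe; then ready=true; else ready=false; fi
  # restarts is deliberately never incremented: readiness failures
  # do not bump restartCount
done
echo "ready=$ready restartCount=$restarts"
```

On a live cluster, the equivalent observation would be `kubectl get pod <name>` showing `READY 0/1` with `RESTARTS 0` for the full test window.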
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 23 14:25:34.553: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580" in namespace "projected-5067" to be "success or failure"
Apr 23 14:25:34.585: INFO: Pod "downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580": Phase="Pending", Reason="", readiness=false. Elapsed: 32.277074ms
Apr 23 14:25:36.699: INFO: Pod "downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580": Phase="Pending", Reason="", readiness=false. Elapsed: 2.146444448s
Apr 23 14:25:38.703: INFO: Pod "downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580": Phase="Pending", Reason="", readiness=false. Elapsed: 4.15058627s
Apr 23 14:25:40.707: INFO: Pod "downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.154782949s
STEP: Saw pod success
Apr 23 14:25:40.707: INFO: Pod "downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580" satisfied condition "success or failure"
Apr 23 14:25:40.710: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580 container client-container:
STEP: delete the pod
Apr 23 14:25:40.742: INFO: Waiting for pod downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580 to disappear
Apr 23 14:25:40.760: INFO: Pod downwardapi-volume-6fbe20bd-47e1-4ddd-b660-9fed509b7580 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:25:40.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5067" for this suite.
Apr 23 14:25:46.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:25:46.864: INFO: namespace projected-5067 deletion completed in 6.100392424s
• [SLOW TEST:12.363 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:25:46.864: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Apr 23 14:25:50.991: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Apr 23 14:25:56.076: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:25:56.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-954" for this suite.
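The Delete Grace Period spec above deletes the pod gracefully and then polls until the name no longer resolves, concluding "no pod exists with the name we were looking for". A rough shell sketch of that delete-then-verify loop; `pod_exists` is a hypothetical stub standing in for a `kubectl get pod` lookup against a live cluster:

```shell
# Rough sketch of the delete-then-verify flow above. pod_exists is a stub
# in place of a real lookup ('kubectl get pod <name>'); here it reports the
# pod as present for three polls, then gone.
remaining=3
pod_exists() {
  [ "$remaining" -gt 0 ] && remaining=$((remaining - 1))
}
# Real deletion step (needs a cluster): kubectl delete pod <name> --grace-period=30
while pod_exists; do
  echo "pod still terminating, polling again"
  sleep 0                     # the test waits between polls
done
echo "no pod exists with the name we were looking for"
```

The design point the test exercises: a graceful delete only marks the pod Terminating, so the client must poll until the API server actually removes the object.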
Apr 23 14:26:02.096: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:26:02.172: INFO: namespace pods-954 deletion completed in 6.0880096s • [SLOW TEST:15.309 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:26:02.173: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 23 14:26:02.290: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c842f7cf-95b8-4596-a8cf-1a39e81c015b" in namespace "projected-9534" to be "success or failure" Apr 23 14:26:02.309: INFO: Pod "downwardapi-volume-c842f7cf-95b8-4596-a8cf-1a39e81c015b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.434726ms Apr 23 14:26:04.313: INFO: Pod "downwardapi-volume-c842f7cf-95b8-4596-a8cf-1a39e81c015b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023333952s Apr 23 14:26:06.319: INFO: Pod "downwardapi-volume-c842f7cf-95b8-4596-a8cf-1a39e81c015b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028822036s STEP: Saw pod success Apr 23 14:26:06.319: INFO: Pod "downwardapi-volume-c842f7cf-95b8-4596-a8cf-1a39e81c015b" satisfied condition "success or failure" Apr 23 14:26:06.322: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c842f7cf-95b8-4596-a8cf-1a39e81c015b container client-container: STEP: delete the pod Apr 23 14:26:06.353: INFO: Waiting for pod downwardapi-volume-c842f7cf-95b8-4596-a8cf-1a39e81c015b to disappear Apr 23 14:26:06.356: INFO: Pod downwardapi-volume-c842f7cf-95b8-4596-a8cf-1a39e81c015b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:26:06.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9534" for this suite. 
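The Projected downwardAPI test above verifies that a container's memory limit can be exposed through a projected volume. A minimal sketch of such a pod, assuming hypothetical names (this is not the test's actual manifest), looks like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo        # hypothetical name
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            # resourceFieldRef surfaces the container's own limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
  restartPolicy: Never
```

The test's "success or failure" condition corresponds to the pod running to completion and its log containing the expected limit value.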
Apr 23 14:26:12.486: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:26:12.652: INFO: namespace projected-9534 deletion completed in 6.291026794s • [SLOW TEST:10.480 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:26:12.653: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-34500ff7-c3d7-401c-822e-2624d02dc9d8 STEP: Creating a pod to test consume configMaps Apr 23 14:26:12.756: INFO: Waiting up to 5m0s for pod "pod-configmaps-3826f521-dc55-4fab-ac1e-0a4222ef19aa" in namespace "configmap-599" to be "success or failure" Apr 23 14:26:12.792: INFO: Pod "pod-configmaps-3826f521-dc55-4fab-ac1e-0a4222ef19aa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 35.337652ms Apr 23 14:26:14.796: INFO: Pod "pod-configmaps-3826f521-dc55-4fab-ac1e-0a4222ef19aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039577329s Apr 23 14:26:16.806: INFO: Pod "pod-configmaps-3826f521-dc55-4fab-ac1e-0a4222ef19aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.049343237s STEP: Saw pod success Apr 23 14:26:16.806: INFO: Pod "pod-configmaps-3826f521-dc55-4fab-ac1e-0a4222ef19aa" satisfied condition "success or failure" Apr 23 14:26:16.808: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3826f521-dc55-4fab-ac1e-0a4222ef19aa container configmap-volume-test: STEP: delete the pod Apr 23 14:26:16.822: INFO: Waiting for pod pod-configmaps-3826f521-dc55-4fab-ac1e-0a4222ef19aa to disappear Apr 23 14:26:16.839: INFO: Pod pod-configmaps-3826f521-dc55-4fab-ac1e-0a4222ef19aa no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:26:16.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-599" for this suite. 
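The ConfigMap test above consumes a ConfigMap as a volume while running as a non-root user. A sketch under assumed names (not the test's real manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: configmap-nonroot-demo     # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                # non-root UID, per [LinuxOnly] variant
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/config/data-1"]
    volumeMounts:
    - name: config
      mountPath: /etc/config
  volumes:
  - name: config
    configMap:
      name: my-config              # hypothetical ConfigMap name
  restartPolicy: Never
```

Each key in the ConfigMap appears as a file under the mount path, readable by the non-root UID.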
Apr 23 14:26:22.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:26:22.935: INFO: namespace configmap-599 deletion completed in 6.092777081s • [SLOW TEST:10.283 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:26:22.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 23 14:26:23.028: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc59782b-df87-49ec-b12a-7eb1c56f63ba" in namespace "downward-api-4206" to be "success or failure" Apr 23 14:26:23.043: INFO: Pod "downwardapi-volume-dc59782b-df87-49ec-b12a-7eb1c56f63ba": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.137411ms Apr 23 14:26:25.047: INFO: Pod "downwardapi-volume-dc59782b-df87-49ec-b12a-7eb1c56f63ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018958611s Apr 23 14:26:27.071: INFO: Pod "downwardapi-volume-dc59782b-df87-49ec-b12a-7eb1c56f63ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043167727s STEP: Saw pod success Apr 23 14:26:27.071: INFO: Pod "downwardapi-volume-dc59782b-df87-49ec-b12a-7eb1c56f63ba" satisfied condition "success or failure" Apr 23 14:26:27.074: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-dc59782b-df87-49ec-b12a-7eb1c56f63ba container client-container: STEP: delete the pod Apr 23 14:26:27.089: INFO: Waiting for pod downwardapi-volume-dc59782b-df87-49ec-b12a-7eb1c56f63ba to disappear Apr 23 14:26:27.093: INFO: Pod downwardapi-volume-dc59782b-df87-49ec-b12a-7eb1c56f63ba no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:26:27.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4206" for this suite. 
Apr 23 14:26:33.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:26:33.188: INFO: namespace downward-api-4206 deletion completed in 6.090956262s • [SLOW TEST:10.252 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:26:33.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium Apr 23 14:26:33.251: INFO: Waiting up to 5m0s for pod "pod-a3a7fcc6-a782-438a-afa0-42e41b09d8b9" in namespace "emptydir-1326" to be "success or failure" Apr 23 14:26:33.255: INFO: Pod "pod-a3a7fcc6-a782-438a-afa0-42e41b09d8b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.240444ms Apr 23 14:26:35.260: INFO: Pod "pod-a3a7fcc6-a782-438a-afa0-42e41b09d8b9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008324753s Apr 23 14:26:37.264: INFO: Pod "pod-a3a7fcc6-a782-438a-afa0-42e41b09d8b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012505724s STEP: Saw pod success Apr 23 14:26:37.264: INFO: Pod "pod-a3a7fcc6-a782-438a-afa0-42e41b09d8b9" satisfied condition "success or failure" Apr 23 14:26:37.267: INFO: Trying to get logs from node iruya-worker pod pod-a3a7fcc6-a782-438a-afa0-42e41b09d8b9 container test-container: STEP: delete the pod Apr 23 14:26:37.287: INFO: Waiting for pod pod-a3a7fcc6-a782-438a-afa0-42e41b09d8b9 to disappear Apr 23 14:26:37.294: INFO: Pod pod-a3a7fcc6-a782-438a-afa0-42e41b09d8b9 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:26:37.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1326" for this suite. Apr 23 14:26:43.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:26:43.387: INFO: namespace emptydir-1326 deletion completed in 6.089354621s • [SLOW TEST:10.198 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:26:43.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api 
object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-7cf9b816-2876-4a61-8cb6-2b5e3bed4fec [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:26:43.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6455" for this suite. Apr 23 14:26:49.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:26:49.528: INFO: namespace secrets-6455 deletion completed in 6.104986222s • [SLOW TEST:6.141 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:26:49.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: 
Creating secret with name s-test-opt-del-26ee76fa-5942-45f8-9d78-a50c5d239098 STEP: Creating secret with name s-test-opt-upd-ef6b1ff3-15d4-4d81-9cb7-52991c63096a STEP: Creating the pod STEP: Deleting secret s-test-opt-del-26ee76fa-5942-45f8-9d78-a50c5d239098 STEP: Updating secret s-test-opt-upd-ef6b1ff3-15d4-4d81-9cb7-52991c63096a STEP: Creating secret with name s-test-opt-create-c1f58041-72ed-4bab-b825-57d59bd0fd31 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:28:16.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1404" for this suite. Apr 23 14:28:38.111: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:28:38.186: INFO: namespace secrets-1404 deletion completed in 22.101305745s • [SLOW TEST:108.657 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:28:38.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a 
pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 23 14:28:46.316: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:28:46.324: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:28:48.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:28:48.329: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:28:50.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:28:50.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:28:52.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:28:52.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:28:54.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:28:54.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:28:56.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:28:56.329: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:28:58.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:28:58.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:29:00.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:29:00.327: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:29:02.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:29:02.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:29:04.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:29:04.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 
14:29:06.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:29:06.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:29:08.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:29:08.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:29:10.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:29:10.328: INFO: Pod pod-with-prestop-exec-hook still exists Apr 23 14:29:12.324: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Apr 23 14:29:12.328: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:29:12.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-4218" for this suite. Apr 23 14:29:34.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:29:34.433: INFO: namespace container-lifecycle-hook-4218 deletion completed in 22.095241111s • [SLOW TEST:56.247 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:29:34.434: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 23 14:29:34.514: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. Apr 23 14:29:35.121: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 23 14:29:37.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723248975, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723248975, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63723248975, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63723248975, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 23 14:29:39.898: INFO: Waited 623.095173ms for the sample-apiserver to be ready to handle requests. 
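The Aggregator test above registers a sample extension API server behind the kube-apiserver. Registration is done with an APIService object pointing at an in-cluster Service; a hedged sketch with hypothetical group and service names (the conformance test uses its own wardle sample server):

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.example.com    # hypothetical group/version
spec:
  group: wardle.example.com
  version: v1alpha1
  service:
    name: sample-api                   # hypothetical Service name
    namespace: default
  insecureSkipTLSVerify: true          # a real deployment would set caBundle
  groupPriorityMinimum: 2000
  versionPriority: 200
```

Once the backing deployment is Available, requests for that group/version are proxied by the aggregator to the sample server, which is what the "ready to handle requests" log line reflects.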
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:29:40.281: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-9405" for this suite. Apr 23 14:29:46.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:29:46.606: INFO: namespace aggregator-9405 deletion completed in 6.272330149s • [SLOW TEST:12.172 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:29:46.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium Apr 23 14:29:46.705: INFO: Waiting up to 5m0s for pod 
"pod-7f5fef46-261f-43b1-9508-dd24a56d1052" in namespace "emptydir-7135" to be "success or failure" Apr 23 14:29:46.735: INFO: Pod "pod-7f5fef46-261f-43b1-9508-dd24a56d1052": Phase="Pending", Reason="", readiness=false. Elapsed: 30.235557ms Apr 23 14:29:48.739: INFO: Pod "pod-7f5fef46-261f-43b1-9508-dd24a56d1052": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033830852s Apr 23 14:29:50.742: INFO: Pod "pod-7f5fef46-261f-43b1-9508-dd24a56d1052": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037009155s STEP: Saw pod success Apr 23 14:29:50.742: INFO: Pod "pod-7f5fef46-261f-43b1-9508-dd24a56d1052" satisfied condition "success or failure" Apr 23 14:29:50.744: INFO: Trying to get logs from node iruya-worker2 pod pod-7f5fef46-261f-43b1-9508-dd24a56d1052 container test-container: STEP: delete the pod Apr 23 14:29:50.776: INFO: Waiting for pod pod-7f5fef46-261f-43b1-9508-dd24a56d1052 to disappear Apr 23 14:29:50.782: INFO: Pod pod-7f5fef46-261f-43b1-9508-dd24a56d1052 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:29:50.782: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7135" for this suite. 
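The emptyDir test above checks the mode of a volume on the default medium. A minimal sketch (hypothetical names, not the test's manifest):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo        # hypothetical name
spec:
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir: {}             # default medium: node-local backing storage
  restartPolicy: Never
```

Leaving `emptyDir.medium` unset selects the node's default storage (as opposed to `medium: Memory` for a tmpfs), and the test asserts the mount is created with the expected mode.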
Apr 23 14:29:56.816: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:29:56.890: INFO: namespace emptydir-7135 deletion completed in 6.08499653s • [SLOW TEST:10.284 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:29:56.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 23 14:30:01.490: INFO: Successfully updated pod "labelsupdate5974c3cf-2045-4019-b0ac-4f872338e6f1" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:30:03.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9750" for this suite. 
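The Downward API volume test above updates a pod's labels and expects the kubelet to refresh the projected file. A sketch of the mechanism, with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo       # hypothetical name
  labels:
    key: value1
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: labels
        # fieldRef to metadata.labels is refreshed by the kubelet
        # when the pod's labels are modified after creation
        fieldRef:
          fieldPath: metadata.labels
```

After the test patches the pod's labels, the kubelet eventually rewrites `/etc/podinfo/labels`, which the container observes on its next read.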
Apr 23 14:30:25.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:30:25.625: INFO: namespace downward-api-9750 deletion completed in 22.101858595s • [SLOW TEST:28.734 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:30:25.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:30:25.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6817" for this suite. 
Apr 23 14:30:31.784: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:30:31.884: INFO: namespace kubelet-test-6817 deletion completed in 6.107192976s • [SLOW TEST:6.259 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:30:31.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4507/secret-test-0173db29-1c13-4ce3-8156-934594bfae03 STEP: Creating a pod to test consume secrets Apr 23 14:30:31.964: INFO: Waiting up to 5m0s for pod "pod-configmaps-75fa20aa-fc32-455c-89b4-3cf31aa58913" in namespace "secrets-4507" to be "success or failure" Apr 23 14:30:31.984: INFO: Pod "pod-configmaps-75fa20aa-fc32-455c-89b4-3cf31aa58913": Phase="Pending", Reason="", readiness=false. 
Elapsed: 19.543638ms Apr 23 14:30:33.988: INFO: Pod "pod-configmaps-75fa20aa-fc32-455c-89b4-3cf31aa58913": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023899794s Apr 23 14:30:35.993: INFO: Pod "pod-configmaps-75fa20aa-fc32-455c-89b4-3cf31aa58913": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028789566s STEP: Saw pod success Apr 23 14:30:35.993: INFO: Pod "pod-configmaps-75fa20aa-fc32-455c-89b4-3cf31aa58913" satisfied condition "success or failure" Apr 23 14:30:35.996: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-75fa20aa-fc32-455c-89b4-3cf31aa58913 container env-test: STEP: delete the pod Apr 23 14:30:36.118: INFO: Waiting for pod pod-configmaps-75fa20aa-fc32-455c-89b4-3cf31aa58913 to disappear Apr 23 14:30:36.143: INFO: Pod pod-configmaps-75fa20aa-fc32-455c-89b4-3cf31aa58913 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:30:36.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4507" for this suite. 
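The Secrets test above consumes a Secret through environment variables rather than a volume. A hedged sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo          # hypothetical name
spec:
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: my-secret        # hypothetical Secret name
          key: data-1
  restartPolicy: Never
```

The test then fetches the container's log and checks it against the decoded Secret value.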
Apr 23 14:30:42.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:30:42.272: INFO: namespace secrets-4507 deletion completed in 6.117161631s • [SLOW TEST:10.386 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:30:42.272: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Apr 23 14:30:47.433: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:30:48.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-3901" for this suite. 
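The ReplicaSet test above relies on label-selector ownership: a pre-existing pod whose labels match the selector is adopted, and changing that pod's label releases it as an orphan. A sketch of a matching ReplicaSet (image is a hypothetical stand-in):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release   # adopts existing pods carrying this label
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: app
        image: nginx
```

Relabeling the adopted pod so it no longer matches `spec.selector` removes the controller's ownerReference and the ReplicaSet creates a replacement, which is the "released" state the test asserts.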
Apr 23 14:31:10.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:31:10.609: INFO: namespace replicaset-3901 deletion completed in 22.151752768s • [SLOW TEST:28.337 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:31:10.610: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-7d098c99-a3f0-432e-a074-e218c1ffc14a STEP: Creating a pod to test consume configMaps Apr 23 14:31:10.671: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-2a2e1e0c-7a90-4b7c-907c-081fc5db0daa" in namespace "projected-3971" to be "success or failure" Apr 23 14:31:10.675: INFO: Pod "pod-projected-configmaps-2a2e1e0c-7a90-4b7c-907c-081fc5db0daa": Phase="Pending", Reason="", readiness=false. 
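[Editor's note] The adopt/release behavior the ReplicaSet test above verifies can be sketched in plain Python (illustrative only, not the actual controller code): a ReplicaSet-like controller adopts orphaned pods whose labels match its selector and releases pods whose labels stop matching.

```python
def selector_matches(selector, labels):
    """True if every key/value pair in the selector appears in the pod's labels."""
    return all(labels.get(k) == v for k, v in selector.items())

def reconcile(rs_selector, pods):
    """Return (adopted, released) pod names, mimicking adopt/release semantics.

    Each pod is a dict with "name", "labels", and "owner" keys (a simplification
    of ownerReferences). Orphans with matching labels are adopted; owned pods
    whose labels no longer match are released back to orphan status.
    """
    adopted, released = [], []
    for pod in pods:
        matches = selector_matches(rs_selector, pod["labels"])
        if matches and pod["owner"] is None:
            pod["owner"] = "replicaset"      # adopt the orphan
            adopted.append(pod["name"])
        elif not matches and pod["owner"] == "replicaset":
            pod["owner"] = None              # release: labels no longer match
            released.append(pod["name"])
    return adopted, released
```

This mirrors the test's two phases: the orphan pod is adopted when the matching ReplicaSet appears, then released after its label is changed.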
Elapsed: 3.912693ms Apr 23 14:31:12.678: INFO: Pod "pod-projected-configmaps-2a2e1e0c-7a90-4b7c-907c-081fc5db0daa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007537326s Apr 23 14:31:14.683: INFO: Pod "pod-projected-configmaps-2a2e1e0c-7a90-4b7c-907c-081fc5db0daa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012230695s STEP: Saw pod success Apr 23 14:31:14.683: INFO: Pod "pod-projected-configmaps-2a2e1e0c-7a90-4b7c-907c-081fc5db0daa" satisfied condition "success or failure" Apr 23 14:31:14.686: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-2a2e1e0c-7a90-4b7c-907c-081fc5db0daa container projected-configmap-volume-test: STEP: delete the pod Apr 23 14:31:14.701: INFO: Waiting for pod pod-projected-configmaps-2a2e1e0c-7a90-4b7c-907c-081fc5db0daa to disappear Apr 23 14:31:14.705: INFO: Pod pod-projected-configmaps-2a2e1e0c-7a90-4b7c-907c-081fc5db0daa no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:31:14.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3971" for this suite. 
Apr 23 14:31:20.721: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:31:20.802: INFO: namespace projected-3971 deletion completed in 6.093401315s • [SLOW TEST:10.192 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:31:20.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-2696e005-0824-4635-b1ab-a7c6dca46cf8 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:31:20.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-161" for this suite. 
Apr 23 14:31:26.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:31:27.017: INFO: namespace configmap-161 deletion completed in 6.132595402s • [SLOW TEST:6.215 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:31:27.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:31:32.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7922" for this suite. 
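[Editor's note] The ConfigMap test above expects the API server to reject a ConfigMap whose data contains an empty key. A minimal sketch of that validation rule (illustrative only; the real apiserver validation also restricts the allowed characters in a key):

```python
def validate_configmap_keys(data):
    """Return a list of validation errors for ConfigMap data keys.

    Loosely mirrors the apiserver's rules: keys must be non-empty and
    at most 253 characters. An empty error list means the keys are valid.
    """
    errors = []
    for key in data:
        if key == "":
            errors.append("data key must not be empty")
        elif len(key) > 253:
            errors.append(f"data key {key!r} exceeds 253 characters")
    return errors
```

With a non-empty error list the create request would be rejected, which is exactly what the test asserts.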
Apr 23 14:31:38.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:31:38.802: INFO: namespace watch-7922 deletion completed in 6.17844757s • [SLOW TEST:11.785 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:31:38.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 23 14:31:38.847: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 23 14:31:38.865: INFO: Waiting for terminating namespaces to be deleted... 
Apr 23 14:31:38.868: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 23 14:31:38.872: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 23 14:31:38.872: INFO: Container kube-proxy ready: true, restart count 0 Apr 23 14:31:38.872: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 23 14:31:38.872: INFO: Container kindnet-cni ready: true, restart count 0 Apr 23 14:31:38.872: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 23 14:31:38.886: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 23 14:31:38.886: INFO: Container coredns ready: true, restart count 0 Apr 23 14:31:38.886: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 23 14:31:38.886: INFO: Container coredns ready: true, restart count 0 Apr 23 14:31:38.886: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 23 14:31:38.886: INFO: Container kube-proxy ready: true, restart count 0 Apr 23 14:31:38.886: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 23 14:31:38.886: INFO: Container kindnet-cni ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-6805e74e-1275-4100-88ce-31ab4d275482 42 STEP: Trying to relaunch the pod, now with labels. 
STEP: removing the label kubernetes.io/e2e-6805e74e-1275-4100-88ce-31ab4d275482 off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-6805e74e-1275-4100-88ce-31ab4d275482 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:31:47.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-493" for this suite. Apr 23 14:31:55.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:31:55.268: INFO: namespace sched-pred-493 deletion completed in 8.102377249s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:16.465 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:31:55.268: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] 
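[Editor's note] The NodeSelector predicate verified above reduces to a simple rule: a pod is only feasible on nodes whose labels contain every entry of the pod's nodeSelector. A small sketch (not the actual scheduler code):

```python
def node_selector_matches(node_labels, node_selector):
    """A pod fits a node only if every nodeSelector entry equals a node label."""
    return all(node_labels.get(k) == v for k, v in node_selector.items())

def feasible_nodes(nodes, node_selector):
    """Filter the nodes the scheduler would consider for a pod with this selector.

    `nodes` maps node name -> label dict, as in the test where a random
    label is applied to one worker and the pod is relaunched with it.
    """
    return [name for name, labels in nodes.items()
            if node_selector_matches(labels, node_selector)]
```

In the run above, only iruya-worker2 carried the random e2e label, so the relaunched pod could land only there.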
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0423 14:31:56.410207 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 23 14:31:56.410: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:31:56.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-110" for this suite. 
Apr 23 14:32:02.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:32:02.519: INFO: namespace gc-110 deletion completed in 6.106813708s • [SLOW TEST:7.251 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:32:02.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 23 14:32:02.619: INFO: Got : MODIFIED 
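[Editor's note] The garbage collector test above deletes a Deployment and waits for its ReplicaSet and pods to be collected. Conceptually, deletion cascades down the ownerReference chain; a toy simulation of that cascade (illustrative, not the real controller):

```python
def cascade_delete(root, objects):
    """Simulate cascading deletion: removing `root` deletes every object
    whose ownership chain leads back to it.

    Each object is a dict with "name" and "owner" (a single-owner
    simplification of ownerReferences). Iterates to a fixed point so
    multi-level chains (Deployment -> ReplicaSet -> Pod) are covered.
    """
    owners = {obj["name"]: obj.get("owner") for obj in objects}
    deleted = set()
    changed = True
    while changed:
        changed = False
        for name, owner in owners.items():
            if name not in deleted and (owner == root or owner in deleted):
                deleted.add(name)
                changed = True
    return sorted(deleted)
```

The "expected 0 rs / 0 pods" retries in the log reflect that this cascade is asynchronous: the GC catches up a moment after the Deployment is gone.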
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9209,SelfLink:/api/v1/namespaces/watch-9209/configmaps/e2e-watch-test-resource-version,UID:f921d024-1c27-4d6d-9cd1-006acb00f3d5,ResourceVersion:7015990,Generation:0,CreationTimestamp:2020-04-23 14:32:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 23 14:32:02.619: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-9209,SelfLink:/api/v1/namespaces/watch-9209/configmaps/e2e-watch-test-resource-version,UID:f921d024-1c27-4d6d-9cd1-006acb00f3d5,ResourceVersion:7015991,Generation:0,CreationTimestamp:2020-04-23 14:32:02 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:32:02.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9209" for this suite. 
Apr 23 14:32:08.662: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:32:08.736: INFO: namespace watch-9209 deletion completed in 6.10718311s • [SLOW TEST:6.216 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:32:08.736: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0423 14:32:39.335207 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
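[Editor's note] The Watchers test above starts a watch "from the resource version returned by the first update" and expects to observe only the later MODIFIED and DELETED events, in order. The core semantics can be sketched like this (a simplification of the real watch API):

```python
def watch_from(events, resource_version):
    """Replay only events strictly newer than the given resourceVersion,
    preserving their original order (which is what the test asserts)."""
    return [e for e in events if e["rv"] > resource_version]
```

With the resource versions from this run, a watch started after the first update sees exactly the two events logged above.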
Apr 23 14:32:39.335: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:32:39.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4305" for this suite. 
Apr 23 14:32:45.369: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:32:45.515: INFO: namespace gc-4305 deletion completed in 6.176690402s • [SLOW TEST:36.779 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:32:45.515: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 23 14:32:50.109: INFO: Successfully updated pod "pod-update-activedeadlineseconds-418179bd-2568-43f6-b963-8868a62328dd" Apr 23 14:32:50.110: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-418179bd-2568-43f6-b963-8868a62328dd" in namespace "pods-4912" to be "terminated due to deadline exceeded" Apr 23 14:32:50.126: INFO: Pod 
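[Editor's note] The test above verifies the opposite of the earlier cascade: with deleteOptions.PropagationPolicy=Orphan, dependents survive the owner's deletion and only lose their ownerReference. A sketch of the two policies side by side (simplified, single-owner model):

```python
def delete_with_policy(owner, objects, policy):
    """Sketch of deleteOptions.propagationPolicy: 'Orphan' keeps dependents
    alive but clears their ownerReference; 'Background' deletes them.

    Returns the surviving objects; the input list is left unmodified.
    """
    survivors = []
    for obj in objects:
        if obj.get("owner") == owner:
            if policy == "Orphan":
                survivors.append({**obj, "owner": None})  # orphaned, kept alive
            # Background: dependent is garbage collected (dropped)
        else:
            survivors.append(obj)
    return survivors
```

That is why the log waits 30 seconds "to see if the garbage collector mistakenly deletes the rs": the ReplicaSet must still exist after the orphaning delete.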
"pod-update-activedeadlineseconds-418179bd-2568-43f6-b963-8868a62328dd": Phase="Running", Reason="", readiness=true. Elapsed: 16.562253ms Apr 23 14:32:52.131: INFO: Pod "pod-update-activedeadlineseconds-418179bd-2568-43f6-b963-8868a62328dd": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.021005943s Apr 23 14:32:52.131: INFO: Pod "pod-update-activedeadlineseconds-418179bd-2568-43f6-b963-8868a62328dd" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:32:52.131: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4912" for this suite. Apr 23 14:32:58.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:32:58.225: INFO: namespace pods-4912 deletion completed in 6.089430896s • [SLOW TEST:12.710 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:32:58.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2003.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2003.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2003.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2003.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2003.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-2003.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 23 14:33:04.318: INFO: DNS probes using dns-2003/dns-test-bbc14e9b-a7f8-4b94-b870-0127469243d7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:33:04.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2003" for this suite. Apr 23 14:33:10.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:33:10.483: INFO: namespace dns-2003 deletion completed in 6.08934449s • [SLOW TEST:12.257 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:33:10.484: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default 
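[Editor's note] The awk expression in the probe scripts above builds the pod's DNS A record by dash-joining the octets of its IP. The same derivation in Python (cluster domain assumed to be the default cluster.local, as in this run):

```python
def pod_a_record(pod_ip, namespace, cluster_domain="cluster.local"):
    """Build a pod A record name the way the probe's awk one-liner does:
    e.g. 10.244.1.5 in namespace dns-2003 -> 10-244-1-5.dns-2003.pod.cluster.local
    """
    return f"{pod_ip.replace('.', '-')}.{namespace}.pod.{cluster_domain}"
```

The probes then resolve this name over both UDP (`dig +notcp`) and TCP (`dig +tcp`) and write an OK marker file for each successful lookup.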
service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 23 14:33:10.555: INFO: Waiting up to 5m0s for pod "downwardapi-volume-9e2fd66c-ed2c-406d-b46f-a642b66ab69b" in namespace "projected-5529" to be "success or failure" Apr 23 14:33:10.560: INFO: Pod "downwardapi-volume-9e2fd66c-ed2c-406d-b46f-a642b66ab69b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.744165ms Apr 23 14:33:12.565: INFO: Pod "downwardapi-volume-9e2fd66c-ed2c-406d-b46f-a642b66ab69b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010201658s Apr 23 14:33:14.568: INFO: Pod "downwardapi-volume-9e2fd66c-ed2c-406d-b46f-a642b66ab69b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013676761s STEP: Saw pod success Apr 23 14:33:14.568: INFO: Pod "downwardapi-volume-9e2fd66c-ed2c-406d-b46f-a642b66ab69b" satisfied condition "success or failure" Apr 23 14:33:14.571: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-9e2fd66c-ed2c-406d-b46f-a642b66ab69b container client-container: STEP: delete the pod Apr 23 14:33:14.617: INFO: Waiting for pod downwardapi-volume-9e2fd66c-ed2c-406d-b46f-a642b66ab69b to disappear Apr 23 14:33:14.667: INFO: Pod downwardapi-volume-9e2fd66c-ed2c-406d-b46f-a642b66ab69b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:33:14.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5529" for this suite. 
Apr 23 14:33:20.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:33:20.758: INFO: namespace projected-5529 deletion completed in 6.08719021s • [SLOW TEST:10.275 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:33:20.759: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 23 14:33:20.870: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
Apr 23 14:33:20.877: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:20.897: INFO: Number of nodes with available pods: 0 Apr 23 14:33:20.897: INFO: Node iruya-worker is running more than one daemon pod Apr 23 14:33:21.903: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:21.907: INFO: Number of nodes with available pods: 0 Apr 23 14:33:21.907: INFO: Node iruya-worker is running more than one daemon pod Apr 23 14:33:22.902: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:22.906: INFO: Number of nodes with available pods: 0 Apr 23 14:33:22.906: INFO: Node iruya-worker is running more than one daemon pod Apr 23 14:33:23.902: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:23.906: INFO: Number of nodes with available pods: 0 Apr 23 14:33:23.906: INFO: Node iruya-worker is running more than one daemon pod Apr 23 14:33:24.902: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:24.906: INFO: Number of nodes with available pods: 2 Apr 23 14:33:24.906: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 23 14:33:24.954: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 23 14:33:24.954: INFO: Wrong image for pod: daemon-set-knz5q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:24.960: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:25.964: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:25.964: INFO: Wrong image for pod: daemon-set-knz5q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:25.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:26.970: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:26.970: INFO: Wrong image for pod: daemon-set-knz5q. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:26.970: INFO: Pod daemon-set-knz5q is not available Apr 23 14:33:26.990: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:28.058: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:28.058: INFO: Pod daemon-set-j7nfw is not available Apr 23 14:33:28.131: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:28.965: INFO: Wrong image for pod: daemon-set-dl9tz. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:28.965: INFO: Pod daemon-set-j7nfw is not available Apr 23 14:33:28.970: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:29.964: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:29.964: INFO: Pod daemon-set-j7nfw is not available Apr 23 14:33:29.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:31.047: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:31.051: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:31.964: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:31.967: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:32.965: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:32.965: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:32.970: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:33.964: INFO: Wrong image for pod: daemon-set-dl9tz. 
Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:33.964: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:33.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:34.964: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:34.965: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:34.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:35.964: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:35.964: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:35.968: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:36.965: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:36.965: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:36.974: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:37.966: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 23 14:33:37.966: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:37.970: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:38.965: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:38.966: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:38.969: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:39.965: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:39.965: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:39.969: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:40.965: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 23 14:33:40.965: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:40.969: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:41.965: INFO: Wrong image for pod: daemon-set-dl9tz. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 23 14:33:41.965: INFO: Pod daemon-set-dl9tz is not available Apr 23 14:33:41.969: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:42.965: INFO: Pod daemon-set-qsprc is not available Apr 23 14:33:42.969: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. Apr 23 14:33:42.973: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:42.976: INFO: Number of nodes with available pods: 1 Apr 23 14:33:42.976: INFO: Node iruya-worker is running more than one daemon pod Apr 23 14:33:44.003: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:44.006: INFO: Number of nodes with available pods: 1 Apr 23 14:33:44.006: INFO: Node iruya-worker is running more than one daemon pod Apr 23 14:33:44.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:44.985: INFO: Number of nodes with available pods: 1 Apr 23 14:33:44.985: INFO: Node iruya-worker is running more than one daemon pod Apr 23 14:33:45.982: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 23 14:33:45.985: INFO: Number of nodes with available pods: 2 Apr 23 14:33:45.985: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set 
[Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3837, will wait for the garbage collector to delete the pods Apr 23 14:33:46.061: INFO: Deleting DaemonSet.extensions daemon-set took: 6.459095ms Apr 23 14:33:46.361: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.448853ms Apr 23 14:33:52.265: INFO: Number of nodes with available pods: 0 Apr 23 14:33:52.265: INFO: Number of running nodes: 0, number of available pods: 0 Apr 23 14:33:52.267: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3837/daemonsets","resourceVersion":"7016443"},"items":null} Apr 23 14:33:52.269: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3837/pods","resourceVersion":"7016443"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:33:52.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3837" for this suite. 
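The once-per-second "Wrong image for pod" and "Number of nodes with available pods" lines above are a poll loop: the framework re-checks the DaemonSet's state until the RollingUpdate converges or a deadline passes. A hedged, cluster-free sketch of that pattern (`check` stands in for "all daemon pods run the new image and are available"):

```python
import time

# Poll `check` every `interval` seconds until it returns True or `timeout`
# elapses. Returns True on success, False on deadline expiry.
def wait_for(check, timeout=30.0, interval=1.0,
             clock=time.monotonic, sleep=time.sleep):
    deadline = clock() + timeout
    while True:
        if check():
            return True
        if clock() >= deadline:
            return False
        sleep(interval)

# Example: a condition that becomes true on the third probe.
probes = iter([False, False, True])
assert wait_for(lambda: next(probes), timeout=5, interval=0, sleep=lambda s: None)
```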
Apr 23 14:33:58.294: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:33:58.372: INFO: namespace daemonsets-3837 deletion completed in 6.092829812s • [SLOW TEST:37.614 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:33:58.373: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting the proxy server Apr 23 14:33:58.456: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:33:58.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2344" for this suite. 
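The proxy test passes `-p 0`, which asks the operating system to assign any free ephemeral port rather than a fixed one. That is ordinary socket behavior, sketched here without kubectl: bind to port 0, then read back the port actually assigned.

```python
import socket

# Binding to port 0 lets the kernel pick an unused ephemeral port;
# getsockname() reveals which one (what `kubectl proxy -p 0` reports).
def ephemeral_port():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.bind(("127.0.0.1", 0))
        return s.getsockname()[1]

port = ephemeral_port()
assert 0 < port <= 65535
```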
Apr 23 14:34:04.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:34:04.631: INFO: namespace kubectl-2344 deletion completed in 6.090547141s • [SLOW TEST:6.258 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:34:04.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 23 14:34:04.705: INFO: Waiting up to 5m0s for pod "pod-66bea8ce-a01b-4874-bc93-86805e4164e1" in namespace "emptydir-1051" to be "success or failure" Apr 23 14:34:04.721: INFO: Pod "pod-66bea8ce-a01b-4874-bc93-86805e4164e1": Phase="Pending", Reason="", readiness=false. Elapsed: 16.558148ms Apr 23 14:34:06.742: INFO: Pod "pod-66bea8ce-a01b-4874-bc93-86805e4164e1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.036900526s Apr 23 14:34:08.746: INFO: Pod "pod-66bea8ce-a01b-4874-bc93-86805e4164e1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.041544294s STEP: Saw pod success Apr 23 14:34:08.746: INFO: Pod "pod-66bea8ce-a01b-4874-bc93-86805e4164e1" satisfied condition "success or failure" Apr 23 14:34:08.749: INFO: Trying to get logs from node iruya-worker pod pod-66bea8ce-a01b-4874-bc93-86805e4164e1 container test-container: STEP: delete the pod Apr 23 14:34:08.808: INFO: Waiting for pod pod-66bea8ce-a01b-4874-bc93-86805e4164e1 to disappear Apr 23 14:34:08.811: INFO: Pod pod-66bea8ce-a01b-4874-bc93-86805e4164e1 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:34:08.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1051" for this suite. Apr 23 14:34:14.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:34:14.896: INFO: namespace emptydir-1051 deletion completed in 6.082027714s • [SLOW TEST:10.265 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:34:14.897: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 23 14:34:14.987: INFO: Waiting up to 5m0s for pod "pod-b9fef9b8-bce1-47e8-898b-b124efb40fd2" in namespace "emptydir-6120" to be "success or failure" Apr 23 14:34:14.990: INFO: Pod "pod-b9fef9b8-bce1-47e8-898b-b124efb40fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.272158ms Apr 23 14:34:16.994: INFO: Pod "pod-b9fef9b8-bce1-47e8-898b-b124efb40fd2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006947301s Apr 23 14:34:18.998: INFO: Pod "pod-b9fef9b8-bce1-47e8-898b-b124efb40fd2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011299232s STEP: Saw pod success Apr 23 14:34:18.998: INFO: Pod "pod-b9fef9b8-bce1-47e8-898b-b124efb40fd2" satisfied condition "success or failure" Apr 23 14:34:19.001: INFO: Trying to get logs from node iruya-worker2 pod pod-b9fef9b8-bce1-47e8-898b-b124efb40fd2 container test-container: STEP: delete the pod Apr 23 14:34:19.037: INFO: Waiting for pod pod-b9fef9b8-bce1-47e8-898b-b124efb40fd2 to disappear Apr 23 14:34:19.056: INFO: Pod pod-b9fef9b8-bce1-47e8-898b-b124efb40fd2 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:34:19.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6120" for this suite. 
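The emptyDir permission tests (0666, 0777, 0644, ...) mount the volume with a given mode and have the test container read it back. A local sketch of the same assertion, using a temporary directory in place of the volume mount:

```python
import os
import stat
import tempfile

# Return just the permission bits of a path, as the test container checks.
def mode_of(path):
    return stat.S_IMODE(os.stat(path).st_mode)

with tempfile.TemporaryDirectory() as d:
    p = os.path.join(d, "mount-dir")  # stand-in for the emptyDir mount point
    os.mkdir(p)
    os.chmod(p, 0o777)  # what the (non-root,0777,default) case asserts
    assert mode_of(p) == 0o777
```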
Apr 23 14:34:25.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:34:25.152: INFO: namespace emptydir-6120 deletion completed in 6.092688038s
• [SLOW TEST:10.255 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Guestbook application
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:34:25.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating all guestbook components
Apr 23 14:34:25.200: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Apr 23 14:34:25.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1357'
Apr 23 14:34:27.854: INFO: stderr: ""
Apr 23 14:34:27.854: INFO: stdout: "service/redis-slave created\n"
Apr 23 14:34:27.854: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Apr 23 14:34:27.854: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1357'
Apr 23 14:34:28.122: INFO: stderr: ""
Apr 23 14:34:28.122: INFO: stdout: "service/redis-master created\n"
Apr 23 14:34:28.122: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Apr 23 14:34:28.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1357'
Apr 23 14:34:28.394: INFO: stderr: ""
Apr 23 14:34:28.394: INFO: stdout: "service/frontend created\n"
Apr 23 14:34:28.394: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Apr 23 14:34:28.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1357'
Apr 23 14:34:28.639: INFO: stderr: ""
Apr 23 14:34:28.639: INFO: stdout: "deployment.apps/frontend created\n"
Apr 23 14:34:28.640: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Apr 23 14:34:28.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1357'
Apr 23 14:34:28.960: INFO: stderr: ""
Apr 23 14:34:28.960: INFO: stdout: "deployment.apps/redis-master created\n"
Apr 23 14:34:28.961: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Apr 23 14:34:28.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1357'
Apr 23 14:34:29.226: INFO: stderr: ""
Apr 23 14:34:29.226: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Apr 23 14:34:29.226: INFO: Waiting for all frontend pods to be Running.
Apr 23 14:34:39.278: INFO: Waiting for frontend to serve content.
Apr 23 14:34:39.296: INFO: Trying to add a new entry to the guestbook.
Apr 23 14:34:39.312: INFO: Verifying that added entry can be retrieved.
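The guestbook manifests select service discovery via the `GET_HOSTS_FROM` env var: `dns` resolves the service name through cluster DNS, while `env` reads the `REDIS_MASTER_SERVICE_HOST` variable Kubernetes injects for a service named `redis-master`. A hedged sketch of that fallback (the IP below is illustrative):

```python
# Sketch of the discovery choice the manifest comments describe; `env` is a
# plain dict standing in for the container's environment.
def redis_master_host(env):
    if env.get("GET_HOSTS_FROM", "dns") == "dns":
        return "redis-master"  # name resolved by cluster DNS at connect time
    return env["REDIS_MASTER_SERVICE_HOST"]  # injected service env var

assert redis_master_host({"GET_HOSTS_FROM": "dns"}) == "redis-master"
assert redis_master_host({"GET_HOSTS_FROM": "env",
                          "REDIS_MASTER_SERVICE_HOST": "10.0.0.11"}) == "10.0.0.11"
```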
STEP: using delete to clean up resources Apr 23 14:34:39.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1357' Apr 23 14:34:39.554: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 14:34:39.554: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Apr 23 14:34:39.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1357' Apr 23 14:34:39.713: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 14:34:39.713: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 23 14:34:39.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1357' Apr 23 14:34:39.852: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 14:34:39.852: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 23 14:34:39.853: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1357' Apr 23 14:34:39.967: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" Apr 23 14:34:39.967: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Apr 23 14:34:39.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1357' Apr 23 14:34:40.050: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 14:34:40.050: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Apr 23 14:34:40.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1357' Apr 23 14:34:40.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 23 14:34:40.157: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:34:40.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1357" for this suite. 
Apr 23 14:35:18.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 23 14:35:18.349: INFO: namespace kubectl-1357 deletion completed in 38.177880339s • [SLOW TEST:53.196 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 23 14:35:18.351: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-65733407-ec6e-4284-afdd-364312772a02 STEP: Creating a pod to test consume configMaps Apr 23 14:35:18.432: INFO: Waiting up to 5m0s for pod "pod-configmaps-859faa0e-5410-4160-b81c-8d80701631fb" in namespace "configmap-1994" to be "success or failure" Apr 23 14:35:18.436: INFO: Pod "pod-configmaps-859faa0e-5410-4160-b81c-8d80701631fb": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.710511ms Apr 23 14:35:21.012: INFO: Pod "pod-configmaps-859faa0e-5410-4160-b81c-8d80701631fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.580231815s Apr 23 14:35:23.017: INFO: Pod "pod-configmaps-859faa0e-5410-4160-b81c-8d80701631fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.584710988s STEP: Saw pod success Apr 23 14:35:23.017: INFO: Pod "pod-configmaps-859faa0e-5410-4160-b81c-8d80701631fb" satisfied condition "success or failure" Apr 23 14:35:23.020: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-859faa0e-5410-4160-b81c-8d80701631fb container configmap-volume-test: STEP: delete the pod Apr 23 14:35:23.056: INFO: Waiting for pod pod-configmaps-859faa0e-5410-4160-b81c-8d80701631fb to disappear Apr 23 14:35:23.070: INFO: Pod pod-configmaps-859faa0e-5410-4160-b81c-8d80701631fb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 23 14:35:23.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1994" for this suite. 
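"Consumable from pods in volume with mappings" means the ConfigMap volume uses an `items` list to project specific keys to chosen file paths instead of one file per key. A local sketch of that projection (key and path names below are illustrative, not taken from the test):

```python
import os
import tempfile

# Write each mapped ConfigMap key to its chosen relative path under the
# mount directory, creating intermediate directories as needed.
def project(configmap_data, items, mount_dir):
    for item in items:
        dest = os.path.join(mount_dir, item["path"])
        os.makedirs(os.path.dirname(dest), exist_ok=True)
        with open(dest, "w") as f:
            f.write(configmap_data[item["key"]])

with tempfile.TemporaryDirectory() as mount:
    project({"data-1": "value-1"},
            [{"key": "data-1", "path": "path/to/data-2"}], mount)
    with open(os.path.join(mount, "path/to/data-2")) as f:
        assert f.read() == "value-1"
```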
Apr 23 14:35:29.090: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:35:29.164: INFO: namespace configmap-1994 deletion completed in 6.090109627s
• [SLOW TEST:10.814 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:35:29.165: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Apr 23 14:35:29.264: INFO: Waiting up to 5m0s for pod "pod-e12b26ec-a98f-4999-908f-40e91a7904e7" in namespace "emptydir-3645" to be "success or failure"
Apr 23 14:35:29.274: INFO: Pod "pod-e12b26ec-a98f-4999-908f-40e91a7904e7": Phase="Pending", Reason="", readiness=false. Elapsed: 9.524589ms
Apr 23 14:35:31.277: INFO: Pod "pod-e12b26ec-a98f-4999-908f-40e91a7904e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013099003s
Apr 23 14:35:33.281: INFO: Pod "pod-e12b26ec-a98f-4999-908f-40e91a7904e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016752289s
STEP: Saw pod success
Apr 23 14:35:33.281: INFO: Pod "pod-e12b26ec-a98f-4999-908f-40e91a7904e7" satisfied condition "success or failure"
Apr 23 14:35:33.283: INFO: Trying to get logs from node iruya-worker2 pod pod-e12b26ec-a98f-4999-908f-40e91a7904e7 container test-container:
STEP: delete the pod
Apr 23 14:35:33.299: INFO: Waiting for pod pod-e12b26ec-a98f-4999-908f-40e91a7904e7 to disappear
Apr 23 14:35:33.336: INFO: Pod pod-e12b26ec-a98f-4999-908f-40e91a7904e7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:35:33.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3645" for this suite.
Apr 23 14:35:39.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:35:39.446: INFO: namespace emptydir-3645 deletion completed in 6.105679223s
• [SLOW TEST:10.281 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:35:39.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Apr 23 14:35:39.528: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8902,SelfLink:/api/v1/namespaces/watch-8902/configmaps/e2e-watch-test-label-changed,UID:225380cb-a81c-423f-9100-a9d0845cfea8,ResourceVersion:7016977,Generation:0,CreationTimestamp:2020-04-23 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 23 14:35:39.528: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8902,SelfLink:/api/v1/namespaces/watch-8902/configmaps/e2e-watch-test-label-changed,UID:225380cb-a81c-423f-9100-a9d0845cfea8,ResourceVersion:7016978,Generation:0,CreationTimestamp:2020-04-23 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 23 14:35:39.528: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8902,SelfLink:/api/v1/namespaces/watch-8902/configmaps/e2e-watch-test-label-changed,UID:225380cb-a81c-423f-9100-a9d0845cfea8,ResourceVersion:7016979,Generation:0,CreationTimestamp:2020-04-23 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Apr 23 14:35:49.631: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8902,SelfLink:/api/v1/namespaces/watch-8902/configmaps/e2e-watch-test-label-changed,UID:225380cb-a81c-423f-9100-a9d0845cfea8,ResourceVersion:7017000,Generation:0,CreationTimestamp:2020-04-23 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 23 14:35:49.631: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8902,SelfLink:/api/v1/namespaces/watch-8902/configmaps/e2e-watch-test-label-changed,UID:225380cb-a81c-423f-9100-a9d0845cfea8,ResourceVersion:7017001,Generation:0,CreationTimestamp:2020-04-23 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Apr 23 14:35:49.632: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-8902,SelfLink:/api/v1/namespaces/watch-8902/configmaps/e2e-watch-test-label-changed,UID:225380cb-a81c-423f-9100-a9d0845cfea8,ResourceVersion:7017002,Generation:0,CreationTimestamp:2020-04-23 14:35:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:35:49.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8902" for this suite.
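The ADDED, MODIFIED, DELETED, ADDED sequence logged above is the behavior under test: a watch scoped by a label selector reports an object as deleted the moment its labels stop matching, and as added again once the matching label is restored. That scoping rule can be sketched with a toy in-memory model (illustrative Python only, not Kubernetes client code; `SelectorWatch` and all names are made up for this sketch):

```python
# Toy model of label-selector watch scoping: events are emitted relative
# to the watch's scope, not the object's lifecycle. Leaving scope looks
# like DELETED; re-entering scope looks like ADDED.
class SelectorWatch:
    def __init__(self, selector):
        self.selector = selector  # required label key/value pairs
        self.in_scope = set()     # names currently matching the selector
        self.events = []          # (event_type, name) tuples, in order

    def _matches(self, labels):
        return all(labels.get(k) == v for k, v in self.selector.items())

    def observe(self, name, labels):
        was_in = name in self.in_scope
        now_in = self._matches(labels)
        if now_in and not was_in:
            self.in_scope.add(name)
            self.events.append(("ADDED", name))
        elif was_in and not now_in:
            self.in_scope.discard(name)
            self.events.append(("DELETED", name))
        elif was_in and now_in:
            self.events.append(("MODIFIED", name))
        # objects outside the watch's scope produce no events at all

w = SelectorWatch({"watch-this-configmap": "label-changed-and-restored"})
w.observe("cm", {"watch-this-configmap": "label-changed-and-restored"})  # ADDED
w.observe("cm", {"watch-this-configmap": "label-changed-and-restored"})  # MODIFIED
w.observe("cm", {"watch-this-configmap": "some-other-value"})            # DELETED
w.observe("cm", {"watch-this-configmap": "some-other-value"})            # no event
w.observe("cm", {"watch-this-configmap": "label-changed-and-restored"})  # ADDED
print(w.events)
```

The fourth `observe` call models the "Expecting not to observe a notification" step: modifications to an out-of-scope object are invisible to the watch.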
Apr 23 14:35:55.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:35:55.721: INFO: namespace watch-8902 deletion completed in 6.086256031s
• [SLOW TEST:16.274 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:35:55.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-26206ded-580c-4616-9ba1-3609ec8473b7
STEP: Creating a pod to test consume configMaps
Apr 23 14:35:55.829: INFO: Waiting up to 5m0s for pod "pod-configmaps-d3de5c0e-c857-4d05-8921-0ab6aa446385" in namespace "configmap-1452" to be "success or failure"
Apr 23 14:35:55.858: INFO: Pod "pod-configmaps-d3de5c0e-c857-4d05-8921-0ab6aa446385": Phase="Pending", Reason="", readiness=false. Elapsed: 28.383357ms
Apr 23 14:35:57.861: INFO: Pod "pod-configmaps-d3de5c0e-c857-4d05-8921-0ab6aa446385": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031918216s
Apr 23 14:35:59.864: INFO: Pod "pod-configmaps-d3de5c0e-c857-4d05-8921-0ab6aa446385": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.034871767s
STEP: Saw pod success
Apr 23 14:35:59.864: INFO: Pod "pod-configmaps-d3de5c0e-c857-4d05-8921-0ab6aa446385" satisfied condition "success or failure"
Apr 23 14:35:59.866: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-d3de5c0e-c857-4d05-8921-0ab6aa446385 container configmap-volume-test:
STEP: delete the pod
Apr 23 14:35:59.925: INFO: Waiting for pod pod-configmaps-d3de5c0e-c857-4d05-8921-0ab6aa446385 to disappear
Apr 23 14:36:00.000: INFO: Pod pod-configmaps-d3de5c0e-c857-4d05-8921-0ab6aa446385 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:36:00.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1452" for this suite.
Apr 23 14:36:06.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:36:06.143: INFO: namespace configmap-1452 deletion completed in 6.13931048s
• [SLOW TEST:10.422 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:36:06.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Apr 23 14:36:06.245: INFO: Waiting up to 5m0s for pod "pod-22ca4170-8043-425b-955d-73a4b7c4b45a" in namespace "emptydir-7144" to be "success or failure"
Apr 23 14:36:06.251: INFO: Pod "pod-22ca4170-8043-425b-955d-73a4b7c4b45a": Phase="Pending", Reason="", readiness=false. Elapsed: 5.916756ms
Apr 23 14:36:08.300: INFO: Pod "pod-22ca4170-8043-425b-955d-73a4b7c4b45a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054582681s
Apr 23 14:36:10.305: INFO: Pod "pod-22ca4170-8043-425b-955d-73a4b7c4b45a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.059209357s
STEP: Saw pod success
Apr 23 14:36:10.305: INFO: Pod "pod-22ca4170-8043-425b-955d-73a4b7c4b45a" satisfied condition "success or failure"
Apr 23 14:36:10.308: INFO: Trying to get logs from node iruya-worker2 pod pod-22ca4170-8043-425b-955d-73a4b7c4b45a container test-container:
STEP: delete the pod
Apr 23 14:36:10.358: INFO: Waiting for pod pod-22ca4170-8043-425b-955d-73a4b7c4b45a to disappear
Apr 23 14:36:10.370: INFO: Pod pod-22ca4170-8043-425b-955d-73a4b7c4b45a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:36:10.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7144" for this suite.
Apr 23 14:36:16.386: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:36:16.468: INFO: namespace emptydir-7144 deletion completed in 6.094733716s
• [SLOW TEST:10.324 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-network] Services
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:36:16.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:36:16.515: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-444" for this suite.
Apr 23 14:36:22.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:36:22.674: INFO: namespace services-444 deletion completed in 6.156607943s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.206 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 23 14:36:22.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-429ec3c7-b217-4842-87db-1947ea499c28
STEP: Creating a pod to test consume secrets
Apr 23 14:36:22.743: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f71b4c0f-80af-44ee-91c8-e3ed425eb101" in namespace "projected-4269" to be "success or failure"
Apr 23 14:36:22.791: INFO: Pod "pod-projected-secrets-f71b4c0f-80af-44ee-91c8-e3ed425eb101": Phase="Pending", Reason="", readiness=false. Elapsed: 48.444172ms
Apr 23 14:36:24.795: INFO: Pod "pod-projected-secrets-f71b4c0f-80af-44ee-91c8-e3ed425eb101": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052153658s
Apr 23 14:36:26.799: INFO: Pod "pod-projected-secrets-f71b4c0f-80af-44ee-91c8-e3ed425eb101": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056379459s
STEP: Saw pod success
Apr 23 14:36:26.799: INFO: Pod "pod-projected-secrets-f71b4c0f-80af-44ee-91c8-e3ed425eb101" satisfied condition "success or failure"
Apr 23 14:36:26.802: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f71b4c0f-80af-44ee-91c8-e3ed425eb101 container projected-secret-volume-test:
STEP: delete the pod
Apr 23 14:36:26.920: INFO: Waiting for pod pod-projected-secrets-f71b4c0f-80af-44ee-91c8-e3ed425eb101 to disappear
Apr 23 14:36:26.922: INFO: Pod pod-projected-secrets-f71b4c0f-80af-44ee-91c8-e3ed425eb101 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 23 14:36:26.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4269" for this suite.
Apr 23 14:36:32.937: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 23 14:36:33.016: INFO: namespace projected-4269 deletion completed in 6.090743114s
• [SLOW TEST:10.342 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
Apr 23 14:36:33.017: INFO: Running AfterSuite actions on all nodes
Apr 23 14:36:33.017: INFO: Running AfterSuite actions on node 1
Apr 23 14:36:33.017: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6048.948 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
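Several specs in this run, the emptyDir (non-root,0644,tmpfs) and (root,0777,default) cases among them, boil down to one check: a file created inside the volume must carry the requested permission bits. A local analogy of that check, assuming a plain temporary directory stands in for a pod's emptyDir mount (this is not the e2e framework's actual mount-tester container):

```python
# Create a file in a scratch directory, apply the mode under test,
# and read the permission bits back with stat.S_IMODE (which masks
# off the file-type bits, leaving only the permission bits).
import os
import stat
import tempfile

with tempfile.TemporaryDirectory() as volume:   # stand-in for the emptyDir
    path = os.path.join(volume, "test-file")
    with open(path, "w") as f:
        f.write("mount-tester new file\n")
    os.chmod(path, 0o644)                        # the mode under test
    mode = stat.S_IMODE(os.stat(path).st_mode)   # permission bits only
    print(oct(mode))                             # -> 0o644
```

`os.chmod` sets the bits unconditionally (the process umask only affects modes at creation time), so the read-back value matches the requested mode exactly.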