I0403 12:55:42.044413 6 e2e.go:243] Starting e2e run "c7dabd96-ad39-4dda-bda7-5cccc1631f6b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1585918541 - Will randomize all specs
Will run 215 of 4412 specs

Apr 3 12:55:42.231: INFO: >>> kubeConfig: /root/.kube/config
Apr 3 12:55:42.235: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Apr 3 12:55:42.265: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Apr 3 12:55:42.291: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Apr 3 12:55:42.291: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Apr 3 12:55:42.291: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Apr 3 12:55:42.303: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
Apr 3 12:55:42.303: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Apr 3 12:55:42.303: INFO: e2e test version: v1.15.10
Apr 3 12:55:42.304: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:55:42.305: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
Apr 3 12:55:42.343: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-446.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-446.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-446.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 3 12:55:48.424: INFO: DNS probes using dns-test-38a8f52c-94d5-44ac-aa11-488665655cff succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-446.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-446.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-446.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 3 12:55:54.829: INFO: File wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
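The probe loop the test injects into each client pod (shown in the STEP lines above) can be sketched locally. This is a hedged stand-in, not the test's actual container script: `dig` is stubbed with a shell function so the loop runs without a cluster, the iteration count is shortened, and the result file is written under /tmp instead of /results.

```shell
# Sketch of the per-pod DNS probe loop from the STEP lines above.
# `lookup` stubs `dig +short "$name" CNAME`; inside the real test pod
# the dig query runs against the cluster DNS service.
name="dns-test-service-3.dns-446.svc.cluster.local"
out="/tmp/wheezy_udp@${name}"          # real test writes under /results/
lookup() { echo "bar.example.com."; }  # stand-in for the dig CNAME query
for i in $(seq 1 3); do                # real loop: seq 1 30, one probe per second
  lookup > "$out"
  sleep 0.1
done
cat "$out"
```

The prober then compares the file contents against the expected CNAME target, which is why the log above reports a mismatch for as long as the old `foo.example.com.` record is still cached.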
Apr 3 12:55:54.832: INFO: File jessie_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:55:54.832: INFO: Lookups using dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b failed for: [wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local jessie_udp@dns-test-service-3.dns-446.svc.cluster.local]
Apr 3 12:55:59.837: INFO: File wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:55:59.841: INFO: File jessie_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:55:59.841: INFO: Lookups using dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b failed for: [wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local jessie_udp@dns-test-service-3.dns-446.svc.cluster.local]
Apr 3 12:56:04.837: INFO: File wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:56:04.841: INFO: File jessie_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:56:04.841: INFO: Lookups using dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b failed for: [wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local jessie_udp@dns-test-service-3.dns-446.svc.cluster.local]
Apr 3 12:56:09.838: INFO: File wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:56:09.842: INFO: File jessie_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:56:09.842: INFO: Lookups using dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b failed for: [wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local jessie_udp@dns-test-service-3.dns-446.svc.cluster.local]
Apr 3 12:56:14.838: INFO: File wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:56:14.842: INFO: File jessie_udp@dns-test-service-3.dns-446.svc.cluster.local from pod dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b contains 'foo.example.com. ' instead of 'bar.example.com.'
Apr 3 12:56:14.842: INFO: Lookups using dns-446/dns-test-3c937256-25f7-473c-aea2-5c2032fa412b failed for: [wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local jessie_udp@dns-test-service-3.dns-446.svc.cluster.local]
Apr 3 12:56:19.846: INFO: DNS probes using dns-test-3c937256-25f7-473c-aea2-5c2032fa412b succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-446.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-446.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-446.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-446.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 3 12:56:26.213: INFO: DNS probes using dns-test-9710281b-c06a-4d61-b1bf-ab90124dc366 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:56:26.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-446" for this suite.
Apr 3 12:56:32.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:56:32.382: INFO: namespace dns-446 deletion completed in 6.100200908s

• [SLOW TEST:50.077 seconds]
[sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:56:32.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 3 12:56:32.460: INFO: Waiting up to 5m0s for pod "pod-1c469310-16f2-43fc-a966-dc399f06c83d" in namespace "emptydir-2950" to be "success or failure"
Apr 3 12:56:32.464: INFO: Pod "pod-1c469310-16f2-43fc-a966-dc399f06c83d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.98445ms
Apr 3 12:56:34.468: INFO: Pod "pod-1c469310-16f2-43fc-a966-dc399f06c83d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008015931s
Apr 3 12:56:36.473: INFO: Pod "pod-1c469310-16f2-43fc-a966-dc399f06c83d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012697238s
STEP: Saw pod success
Apr 3 12:56:36.473: INFO: Pod "pod-1c469310-16f2-43fc-a966-dc399f06c83d" satisfied condition "success or failure"
Apr 3 12:56:36.476: INFO: Trying to get logs from node iruya-worker2 pod pod-1c469310-16f2-43fc-a966-dc399f06c83d container test-container:
STEP: delete the pod
Apr 3 12:56:36.505: INFO: Waiting for pod pod-1c469310-16f2-43fc-a966-dc399f06c83d to disappear
Apr 3 12:56:36.518: INFO: Pod pod-1c469310-16f2-43fc-a966-dc399f06c83d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:56:36.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2950" for this suite.
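The permission check behind this emptydir test can be sketched outside a cluster. This is an assumption-laden local stand-in: the real test mounts a tmpfs-backed emptyDir volume in a pod and has its container create and inspect the file; here a temp directory stands in for the mount point.

```shell
# Local sketch of the emptydir 0777 check: create the mount file with the
# requested mode and read the mode back. The real test does this inside a
# pod against a tmpfs-backed emptyDir volume mount.
dir=$(mktemp -d)                 # stands in for the emptyDir mount point
touch "$dir/mount-file"
chmod 0777 "$dir/mount-file"
stat -c '%a' "$dir/mount-file"   # the test asserts the mode it reads back
```

The (root,0644,default) variant later in this run is the same check with a different mode and the node's default storage medium instead of tmpfs.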
Apr 3 12:56:42.534: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:56:42.637: INFO: namespace emptydir-2950 deletion completed in 6.115212933s

• [SLOW TEST:10.255 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:56:42.638: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c71b2684-9c4e-4a78-af68-66ce05825966
STEP: Creating a pod to test consume secrets
Apr 3 12:56:42.736: INFO: Waiting up to 5m0s for pod "pod-secrets-5d5772bd-38a3-40e5-89e5-6374749dc2e6" in namespace "secrets-7865" to be "success or failure"
Apr 3 12:56:42.740: INFO: Pod "pod-secrets-5d5772bd-38a3-40e5-89e5-6374749dc2e6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.974477ms
Apr 3 12:56:44.744: INFO: Pod "pod-secrets-5d5772bd-38a3-40e5-89e5-6374749dc2e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007734868s
Apr 3 12:56:46.749: INFO: Pod "pod-secrets-5d5772bd-38a3-40e5-89e5-6374749dc2e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012302603s
STEP: Saw pod success
Apr 3 12:56:46.749: INFO: Pod "pod-secrets-5d5772bd-38a3-40e5-89e5-6374749dc2e6" satisfied condition "success or failure"
Apr 3 12:56:46.752: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-5d5772bd-38a3-40e5-89e5-6374749dc2e6 container secret-volume-test:
STEP: delete the pod
Apr 3 12:56:46.772: INFO: Waiting for pod pod-secrets-5d5772bd-38a3-40e5-89e5-6374749dc2e6 to disappear
Apr 3 12:56:46.776: INFO: Pod pod-secrets-5d5772bd-38a3-40e5-89e5-6374749dc2e6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:56:46.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7865" for this suite.
Apr 3 12:56:52.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:56:52.880: INFO: namespace secrets-7865 deletion completed in 6.099959294s

• [SLOW TEST:10.242 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:56:52.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Apr 3 12:56:52.935: INFO: Waiting up to 5m0s for pod "pod-5b735f2a-0007-47b5-95c0-0d72f68a51e7" in namespace "emptydir-6269" to be "success or failure"
Apr 3 12:56:52.939: INFO: Pod "pod-5b735f2a-0007-47b5-95c0-0d72f68a51e7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.243676ms
Apr 3 12:56:54.942: INFO: Pod "pod-5b735f2a-0007-47b5-95c0-0d72f68a51e7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00784854s
Apr 3 12:56:56.947: INFO: Pod "pod-5b735f2a-0007-47b5-95c0-0d72f68a51e7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012127637s
STEP: Saw pod success
Apr 3 12:56:56.947: INFO: Pod "pod-5b735f2a-0007-47b5-95c0-0d72f68a51e7" satisfied condition "success or failure"
Apr 3 12:56:56.950: INFO: Trying to get logs from node iruya-worker pod pod-5b735f2a-0007-47b5-95c0-0d72f68a51e7 container test-container:
STEP: delete the pod
Apr 3 12:56:57.010: INFO: Waiting for pod pod-5b735f2a-0007-47b5-95c0-0d72f68a51e7 to disappear
Apr 3 12:56:57.016: INFO: Pod pod-5b735f2a-0007-47b5-95c0-0d72f68a51e7 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:56:57.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6269" for this suite.
Apr 3 12:57:03.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:57:03.115: INFO: namespace emptydir-6269 deletion completed in 6.095117789s

• [SLOW TEST:10.235 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:57:03.115: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Apr 3 12:57:07.281: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:57:07.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6374" for this suite.
Apr 3 12:57:13.367: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:57:13.452: INFO: namespace container-runtime-6374 deletion completed in 6.096973322s

• [SLOW TEST:10.337 seconds]
[k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
  on terminated container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:57:13.452: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:57:39.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-5344" for this suite.
Apr 3 12:57:45.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:57:45.728: INFO: namespace namespaces-5344 deletion completed in 6.091075543s
STEP: Destroying namespace "nsdeletetest-6110" for this suite.
Apr 3 12:57:45.731: INFO: Namespace nsdeletetest-6110 was already deleted
STEP: Destroying namespace "nsdeletetest-528" for this suite.
Apr 3 12:57:51.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:57:51.824: INFO: namespace nsdeletetest-528 deletion completed in 6.093596845s

• [SLOW TEST:38.372 seconds]
[sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:57:51.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 3 12:57:51.993: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:57:59.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-641" for this suite.
Apr 3 12:58:21.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:58:21.338: INFO: namespace init-container-641 deletion completed in 22.095724791s

• [SLOW TEST:29.513 seconds]
[k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:58:21.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0403 12:58:32.725267 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 3 12:58:32.725: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:58:32.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9589" for this suite.
Apr 3 12:58:40.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:58:40.815: INFO: namespace gc-9589 deletion completed in 8.087350498s

• [SLOW TEST:19.476 seconds]
[sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:58:40.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 3 12:58:40.895: INFO: Waiting up to 5m0s for pod "downward-api-00156def-ab42-4385-ad9b-a3200fc0a578" in namespace "downward-api-2870" to be "success or failure"
Apr 3 12:58:40.898: INFO: Pod "downward-api-00156def-ab42-4385-ad9b-a3200fc0a578": Phase="Pending", Reason="", readiness=false. Elapsed: 3.864659ms
Apr 3 12:58:42.903: INFO: Pod "downward-api-00156def-ab42-4385-ad9b-a3200fc0a578": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008132451s
Apr 3 12:58:44.906: INFO: Pod "downward-api-00156def-ab42-4385-ad9b-a3200fc0a578": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011492875s
STEP: Saw pod success
Apr 3 12:58:44.906: INFO: Pod "downward-api-00156def-ab42-4385-ad9b-a3200fc0a578" satisfied condition "success or failure"
Apr 3 12:58:44.908: INFO: Trying to get logs from node iruya-worker pod downward-api-00156def-ab42-4385-ad9b-a3200fc0a578 container dapi-container:
STEP: delete the pod
Apr 3 12:58:44.924: INFO: Waiting for pod downward-api-00156def-ab42-4385-ad9b-a3200fc0a578 to disappear
Apr 3 12:58:44.929: INFO: Pod downward-api-00156def-ab42-4385-ad9b-a3200fc0a578 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:58:44.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2870" for this suite.
Apr 3 12:58:50.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:58:51.035: INFO: namespace downward-api-2870 deletion completed in 6.103507987s

• [SLOW TEST:10.220 seconds]
[sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:58:51.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Apr 3 12:58:51.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-423 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Apr 3 12:58:56.810: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0403 12:58:56.740803 36 log.go:172] (0xc00099e370) (0xc000ab08c0) Create stream\nI0403 12:58:56.740858 36 log.go:172] (0xc00099e370) (0xc000ab08c0) Stream added, broadcasting: 1\nI0403 12:58:56.745313 36 log.go:172] (0xc00099e370) Reply frame received for 1\nI0403 12:58:56.745646 36 log.go:172] (0xc00099e370) (0xc00051e0a0) Create stream\nI0403 12:58:56.745661 36 log.go:172] (0xc00099e370) (0xc00051e0a0) Stream added, broadcasting: 3\nI0403 12:58:56.746603 36 log.go:172] (0xc00099e370) Reply frame received for 3\nI0403 12:58:56.746636 36 log.go:172] (0xc00099e370) (0xc000575900) Create stream\nI0403 12:58:56.746650 36 log.go:172] (0xc00099e370) (0xc000575900) Stream added, broadcasting: 5\nI0403 12:58:56.747367 36 log.go:172] (0xc00099e370) Reply frame received for 5\nI0403 12:58:56.747402 36 log.go:172] (0xc00099e370) (0xc00051e140) Create stream\nI0403 12:58:56.747419 36 log.go:172] (0xc00099e370) (0xc00051e140) Stream added, broadcasting: 7\nI0403 12:58:56.748279 36 log.go:172] (0xc00099e370) Reply frame received for 7\nI0403 12:58:56.748417 36 log.go:172] (0xc00051e0a0) (3) Writing data frame\nI0403 12:58:56.748549 36 log.go:172] (0xc00051e0a0) (3) Writing data frame\nI0403 12:58:56.749613 36 log.go:172] (0xc00099e370) Data frame received for 5\nI0403 12:58:56.749756 36 log.go:172] (0xc000575900) (5) Data frame handling\nI0403 12:58:56.749799 36 log.go:172] (0xc000575900) (5) Data frame sent\nI0403 12:58:56.749998 36 log.go:172] (0xc00099e370) Data frame received for 5\nI0403 12:58:56.750012 36 log.go:172] (0xc000575900) (5) Data frame handling\nI0403 12:58:56.750021 36 log.go:172] (0xc000575900) (5) Data frame sent\nI0403 12:58:56.787708 36 log.go:172] (0xc00099e370) Data frame received for 5\nI0403 12:58:56.787746 36 log.go:172] (0xc00099e370) Data frame received for 7\nI0403 12:58:56.787784 36 log.go:172] (0xc00051e140) (7) Data frame handling\nI0403 12:58:56.787824 36 log.go:172] (0xc000575900) (5) Data frame handling\nI0403 12:58:56.788028 36 log.go:172] (0xc00099e370) Data frame received for 1\nI0403 12:58:56.788060 36 log.go:172] (0xc00099e370) (0xc00051e0a0) Stream removed, broadcasting: 3\nI0403 12:58:56.788112 36 log.go:172] (0xc000ab08c0) (1) Data frame handling\nI0403 12:58:56.788139 36 log.go:172] (0xc000ab08c0) (1) Data frame sent\nI0403 12:58:56.788159 36 log.go:172] (0xc00099e370) (0xc000ab08c0) Stream removed, broadcasting: 1\nI0403 12:58:56.788182 36 log.go:172] (0xc00099e370) Go away received\nI0403 12:58:56.788299 36 log.go:172] (0xc00099e370) (0xc000ab08c0) Stream removed, broadcasting: 1\nI0403 12:58:56.788326 36 log.go:172] (0xc00099e370) (0xc00051e0a0) Stream removed, broadcasting: 3\nI0403 12:58:56.788336 36 log.go:172] (0xc00099e370) (0xc000575900) Stream removed, broadcasting: 5\nI0403 12:58:56.788345 36 log.go:172] (0xc00099e370) (0xc00051e140) Stream removed, broadcasting: 7\n"
Apr 3 12:58:56.810: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 12:58:58.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-423" for this suite.
Apr 3 12:59:04.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 12:59:04.911: INFO: namespace kubectl-423 deletion completed in 6.090124447s

• [SLOW TEST:13.876 seconds]
[sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 12:59:04.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 3 12:59:09.034: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 12:59:09.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-7820" for this suite. Apr 3 12:59:15.160: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 12:59:15.241: INFO: namespace container-runtime-7820 deletion completed in 6.092853907s • [SLOW TEST:10.329 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 12:59:15.241: INFO: >>> 
kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 12:59:15.292: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 12:59:19.464: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5218" for this suite. Apr 3 13:00:03.483: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:00:03.563: INFO: namespace pods-5218 deletion completed in 44.095631313s • [SLOW TEST:48.322 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:00:03.564: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook 
STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Apr 3 13:00:11.672: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 13:00:11.690: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 13:00:13.691: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 13:00:13.695: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 13:00:15.691: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 13:00:15.695: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 13:00:17.691: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 13:00:17.695: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 13:00:19.691: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 13:00:19.695: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 13:00:21.691: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 13:00:21.694: INFO: Pod pod-with-prestop-http-hook still exists Apr 3 13:00:23.691: INFO: Waiting for pod pod-with-prestop-http-hook to disappear Apr 3 13:00:23.695: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:00:23.703: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2671" for this suite. 
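The teardown above polls every two seconds until the pod with the prestop hook is gone. A minimal sketch of that poll loop as a generic shell helper — `wait_for_deletion` and its arguments are hypothetical, not part of the e2e framework, which does this in Go:

```shell
#!/bin/sh
# Sketch of the 2-second "waiting for pod X to disappear" poll loop seen above.
wait_for_deletion() {
  check_cmd=$1   # command that succeeds while the resource still exists
  timeout=$2     # seconds to wait before giving up
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if ! $check_cmd >/dev/null 2>&1; then
      echo "gone after ${elapsed}s"
      return 0
    fi
    sleep 2
    elapsed=$((elapsed + 2))
  done
  echo "timed out after ${timeout}s"
  return 1
}

# Against a live cluster this would look like (requires kubectl + cluster):
#   wait_for_deletion \
#     "kubectl get pod pod-with-prestop-http-hook -n container-lifecycle-hook-2671" 180
```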
Apr 3 13:00:45.723: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:00:45.802: INFO: namespace container-lifecycle-hook-2671 deletion completed in 22.095455211s • [SLOW TEST:42.239 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:00:45.804: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller Apr 3 13:00:45.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - 
--namespace=kubectl-1985' Apr 3 13:00:46.182: INFO: stderr: "" Apr 3 13:00:46.182: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Apr 3 13:00:46.182: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1985' Apr 3 13:00:46.281: INFO: stderr: "" Apr 3 13:00:46.281: INFO: stdout: "update-demo-nautilus-2cmts update-demo-nautilus-l9hwk " Apr 3 13:00:46.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cmts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1985' Apr 3 13:00:46.376: INFO: stderr: "" Apr 3 13:00:46.376: INFO: stdout: "" Apr 3 13:00:46.376: INFO: update-demo-nautilus-2cmts is created but not running Apr 3 13:00:51.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1985' Apr 3 13:00:51.475: INFO: stderr: "" Apr 3 13:00:51.475: INFO: stdout: "update-demo-nautilus-2cmts update-demo-nautilus-l9hwk " Apr 3 13:00:51.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cmts -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1985' Apr 3 13:00:51.572: INFO: stderr: "" Apr 3 13:00:51.572: INFO: stdout: "true" Apr 3 13:00:51.572: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2cmts -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1985' Apr 3 13:00:51.663: INFO: stderr: "" Apr 3 13:00:51.663: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 13:00:51.663: INFO: validating pod update-demo-nautilus-2cmts Apr 3 13:00:51.666: INFO: got data: { "image": "nautilus.jpg" } Apr 3 13:00:51.666: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Apr 3 13:00:51.667: INFO: update-demo-nautilus-2cmts is verified up and running Apr 3 13:00:51.667: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9hwk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1985' Apr 3 13:00:51.753: INFO: stderr: "" Apr 3 13:00:51.753: INFO: stdout: "true" Apr 3 13:00:51.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-l9hwk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1985' Apr 3 13:00:51.855: INFO: stderr: "" Apr 3 13:00:51.855: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Apr 3 13:00:51.855: INFO: validating pod update-demo-nautilus-l9hwk Apr 3 13:00:51.859: INFO: got data: { "image": "nautilus.jpg" } Apr 3 13:00:51.859: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
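Each pod above is validated by comparing the JSON served by its data endpoint (`{ "image": "nautilus.jpg" }`) against the expected image name. A rough shell equivalent of that check — `validate_pod_data` is hypothetical; the suite performs the comparison in Go after unmarshalling the JSON:

```shell
#!/bin/sh
# Sketch: extract the "image" field from the pod's JSON reply and compare it
# with the expected value, as in 'Unmarshalled json jpg/img => {nautilus.jpg}'.
validate_pod_data() {
  got=$1       # JSON body returned by the pod
  expected=$2  # expected image name
  image=$(printf '%s\n' "$got" | \
    sed -n 's/.*"image"[[:space:]]*:[[:space:]]*"\([^"]*\)".*/\1/p')
  [ "$image" = "$expected" ]
}

validate_pod_data '{ "image": "nautilus.jpg" }' nautilus.jpg && echo verified
```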
Apr 3 13:00:51.859: INFO: update-demo-nautilus-l9hwk is verified up and running STEP: using delete to clean up resources Apr 3 13:00:51.859: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1985' Apr 3 13:00:51.966: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 3 13:00:51.966: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" Apr 3 13:00:51.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1985' Apr 3 13:00:52.073: INFO: stderr: "No resources found.\n" Apr 3 13:00:52.073: INFO: stdout: "" Apr 3 13:00:52.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1985 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 3 13:00:52.195: INFO: stderr: "" Apr 3 13:00:52.195: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:00:52.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1985" for this suite. 
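The final cleanup check above uses a one-line go-template that is hard to read in the log. Reformatted into a shell variable (content unchanged), it selects only pods not already marked for deletion; the kubectl invocation is the one from the log and needs a live cluster:

```shell
#!/bin/sh
# The cleanup-verification go-template from the log, split out for readability.
# It prints the name of every pod that lacks a deletionTimestamp, so empty
# output means deletion of the labelled pods has at least been initiated.
CLEANUP_TMPL='{{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'

# Usage as in the log (requires a cluster):
# kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo \
#   --namespace=kubectl-1985 -o go-template="$CLEANUP_TMPL"
echo "$CLEANUP_TMPL"
```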
Apr 3 13:01:14.215: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:01:14.292: INFO: namespace kubectl-1985 deletion completed in 22.091713408s • [SLOW TEST:28.488 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:01:14.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:01:40.399: INFO: Container started at 2020-04-03 13:01:16 +0000 UTC, pod became ready at 2020-04-03 13:01:38 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:01:40.399: INFO: Waiting up to 3m0s 
for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3005" for this suite. Apr 3 13:02:02.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:02:02.496: INFO: namespace container-probe-3005 deletion completed in 22.092617532s • [SLOW TEST:48.203 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:02:02.496: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 13:02:02.584: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11953c37-dab7-42a0-9e9d-c083c5b2ab61" in namespace "projected-8615" to be "success or failure" Apr 3 
13:02:02.587: INFO: Pod "downwardapi-volume-11953c37-dab7-42a0-9e9d-c083c5b2ab61": Phase="Pending", Reason="", readiness=false. Elapsed: 3.096078ms Apr 3 13:02:04.596: INFO: Pod "downwardapi-volume-11953c37-dab7-42a0-9e9d-c083c5b2ab61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012470397s Apr 3 13:02:06.600: INFO: Pod "downwardapi-volume-11953c37-dab7-42a0-9e9d-c083c5b2ab61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016780602s STEP: Saw pod success Apr 3 13:02:06.601: INFO: Pod "downwardapi-volume-11953c37-dab7-42a0-9e9d-c083c5b2ab61" satisfied condition "success or failure" Apr 3 13:02:06.604: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-11953c37-dab7-42a0-9e9d-c083c5b2ab61 container client-container: STEP: delete the pod Apr 3 13:02:06.621: INFO: Waiting for pod downwardapi-volume-11953c37-dab7-42a0-9e9d-c083c5b2ab61 to disappear Apr 3 13:02:06.626: INFO: Pod downwardapi-volume-11953c37-dab7-42a0-9e9d-c083c5b2ab61 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:02:06.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8615" for this suite. 
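The test above creates a pod with a projected downward API volume exposing `limits.memory`; with no limit set on the container, the kubelet substitutes the node's allocatable memory. A sketch of that kind of manifest, written out by a script — the pod and file names here are illustrative, not the generated `downwardapi-volume-…` names from the log:

```shell
#!/bin/sh
# Write an illustrative pod manifest: a projected downwardAPI volume exposing
# the container's memory limit (defaults to node allocatable when unset).
cat > downwardapi-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
# kubectl apply -f downwardapi-pod.yaml   # requires a cluster
echo "wrote downwardapi-pod.yaml"
```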
Apr 3 13:02:12.657: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:02:12.753: INFO: namespace projected-8615 deletion completed in 6.124946033s • [SLOW TEST:10.257 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:02:12.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1696.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1696.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > 
/results/wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1696.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1696.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 138.94.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.94.138_udp@PTR;check="$$(dig +tcp +noall +answer +search 138.94.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.94.138_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1696.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1696.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1696.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1696.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1696.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1696.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1696.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 138.94.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.94.138_udp@PTR;check="$$(dig +tcp +noall +answer +search 138.94.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.94.138_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Apr 3 13:02:18.915: INFO: Unable to read wheezy_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:18.919: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:18.922: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:18.925: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:18.947: INFO: Unable to read jessie_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:18.950: INFO: Unable to read jessie_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:18.953: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod 
dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:18.956: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:18.974: INFO: Lookups using dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba failed for: [wheezy_udp@dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_udp@dns-test-service.dns-1696.svc.cluster.local jessie_tcp@dns-test-service.dns-1696.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local] Apr 3 13:02:23.979: INFO: Unable to read wheezy_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:23.983: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:23.986: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:23.990: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod 
dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:24.011: INFO: Unable to read jessie_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:24.014: INFO: Unable to read jessie_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:24.017: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:24.020: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:24.038: INFO: Lookups using dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba failed for: [wheezy_udp@dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_udp@dns-test-service.dns-1696.svc.cluster.local jessie_tcp@dns-test-service.dns-1696.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local] Apr 3 13:02:28.980: INFO: Unable to read wheezy_udp@dns-test-service.dns-1696.svc.cluster.local from pod 
dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:28.984: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:28.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:28.991: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:29.014: INFO: Unable to read jessie_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:29.017: INFO: Unable to read jessie_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:29.021: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:29.024: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the 
requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:29.044: INFO: Lookups using dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba failed for: [wheezy_udp@dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_udp@dns-test-service.dns-1696.svc.cluster.local jessie_tcp@dns-test-service.dns-1696.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local] Apr 3 13:02:33.984: INFO: Unable to read wheezy_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:33.987: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:33.989: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:33.991: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:34.009: INFO: Unable to read jessie_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods 
dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:34.012: INFO: Unable to read jessie_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:34.015: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:34.018: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:34.037: INFO: Lookups using dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba failed for: [wheezy_udp@dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_udp@dns-test-service.dns-1696.svc.cluster.local jessie_tcp@dns-test-service.dns-1696.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local] Apr 3 13:02:38.980: INFO: Unable to read wheezy_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:38.984: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) 
Apr 3 13:02:38.987: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:38.990: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:39.014: INFO: Unable to read jessie_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:39.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:39.020: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:39.023: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:39.042: INFO: Lookups using dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba failed for: [wheezy_udp@dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local 
jessie_udp@dns-test-service.dns-1696.svc.cluster.local jessie_tcp@dns-test-service.dns-1696.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local] Apr 3 13:02:43.980: INFO: Unable to read wheezy_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:43.984: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:43.988: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:43.991: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:44.013: INFO: Unable to read jessie_udp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:44.016: INFO: Unable to read jessie_tcp@dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:44.018: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod 
dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:44.021: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local from pod dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba: the server could not find the requested resource (get pods dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba) Apr 3 13:02:44.041: INFO: Lookups using dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba failed for: [wheezy_udp@dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@dns-test-service.dns-1696.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_udp@dns-test-service.dns-1696.svc.cluster.local jessie_tcp@dns-test-service.dns-1696.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-1696.svc.cluster.local] Apr 3 13:02:49.044: INFO: DNS probes using dns-1696/dns-test-53832d49-31b1-4872-8d7a-cf15c897b3ba succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:02:49.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1696" for this suite. 
Apr 3 13:02:55.696: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:02:55.786: INFO: namespace dns-1696 deletion completed in 6.134192783s • [SLOW TEST:43.031 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:02:55.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 3 13:02:59.958: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:02:59.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2141" for this suite. Apr 3 13:03:06.010: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:03:06.092: INFO: namespace container-runtime-2141 deletion completed in 6.093477272s • [SLOW TEST:10.305 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:03:06.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1834 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet Apr 3 13:03:06.192: INFO: Found 0 stateful pods, waiting for 3 Apr 3 13:03:16.198: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 13:03:16.198: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 13:03:16.198: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false Apr 3 13:03:26.198: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 13:03:26.198: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 13:03:26.198: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine Apr 3 13:03:26.226: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update Apr 3 13:03:36.264: INFO: Updating stateful set ss2 Apr 3 13:03:36.313: INFO: Waiting for Pod statefulset-1834/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted Apr 3 13:03:46.492: INFO: Found 2 stateful pods, waiting for 3 Apr 3 13:03:56.497: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 13:03:56.497: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 13:03:56.497: INFO: 
Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update Apr 3 13:03:56.521: INFO: Updating stateful set ss2 Apr 3 13:03:56.532: INFO: Waiting for Pod statefulset-1834/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c Apr 3 13:04:06.558: INFO: Updating stateful set ss2 Apr 3 13:04:06.593: INFO: Waiting for StatefulSet statefulset-1834/ss2 to complete update Apr 3 13:04:06.593: INFO: Waiting for Pod statefulset-1834/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 3 13:04:16.602: INFO: Deleting all statefulset in ns statefulset-1834 Apr 3 13:04:16.605: INFO: Scaling statefulset ss2 to 0 Apr 3 13:04:36.623: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 13:04:36.627: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:04:36.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1834" for this suite. 
Apr 3 13:04:42.656: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:04:42.740: INFO: namespace statefulset-1834 deletion completed in 6.094484812s • [SLOW TEST:96.648 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:04:42.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:04:42.808: INFO: Pod name cleanup-pod: Found 0 pods out of 1 Apr 3 13:04:47.813: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 3 13:04:47.813: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 3 13:04:47.853: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5321,SelfLink:/apis/apps/v1/namespaces/deployment-5321/deployments/test-cleanup-deployment,UID:8cc36c93-ee9b-49c2-bd6f-7a17b992150d,ResourceVersion:3391377,Generation:1,CreationTimestamp:2020-04-03 13:04:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} Apr 3 13:04:47.860: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5321,SelfLink:/apis/apps/v1/namespaces/deployment-5321/replicasets/test-cleanup-deployment-55bbcbc84c,UID:c7c790a2-f652-4b76-b30d-c9b2cce15670,ResourceVersion:3391379,Generation:1,CreationTimestamp:2020-04-03 13:04:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 
8cc36c93-ee9b-49c2-bd6f-7a17b992150d 0xc00283a867 0xc00283a868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 3 13:04:47.860: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": Apr 3 13:04:47.860: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5321,SelfLink:/apis/apps/v1/namespaces/deployment-5321/replicasets/test-cleanup-controller,UID:fa6be12a-1cd3-441b-b23f-905b22e5dfe0,ResourceVersion:3391378,Generation:1,CreationTimestamp:2020-04-03 13:04:42 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment 8cc36c93-ee9b-49c2-bd6f-7a17b992150d 0xc00283a797 0xc00283a798}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 3 13:04:47.908: INFO: Pod "test-cleanup-controller-tbln6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-tbln6,GenerateName:test-cleanup-controller-,Namespace:deployment-5321,SelfLink:/api/v1/namespaces/deployment-5321/pods/test-cleanup-controller-tbln6,UID:b9bfee7e-325f-4f30-b8b6-b2c5ac0489ea,ResourceVersion:3391370,Generation:0,CreationTimestamp:2020-04-03 13:04:42 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller fa6be12a-1cd3-441b-b23f-905b22e5dfe0 0xc00283b257 0xc00283b258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-85pbn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-85pbn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-85pbn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00283b2d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc00283b2f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:04:42 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:04:45 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:04:45 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:04:42 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.48,StartTime:2020-04-03 13:04:42 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-03 13:04:45 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://57865bf6b225c5b39b5701af1c52a2b2b4a6efdd25566e200171284f22784d55}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 13:04:47.908: INFO: Pod "test-cleanup-deployment-55bbcbc84c-sgpw2" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-sgpw2,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5321,SelfLink:/api/v1/namespaces/deployment-5321/pods/test-cleanup-deployment-55bbcbc84c-sgpw2,UID:56cf05a9-5d8a-46f9-a8cb-fbddd1f90e57,ResourceVersion:3391385,Generation:0,CreationTimestamp:2020-04-03 13:04:47 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c c7c790a2-f652-4b76-b30d-c9b2cce15670 0xc00283b3d7 0xc00283b3d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-85pbn {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-85pbn,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-85pbn true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00283b460} {node.kubernetes.io/unreachable Exists NoExecute 0xc00283b490}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:04:47 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:04:47.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5321" for this suite. 
Apr 3 13:04:53.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:04:54.054: INFO: namespace deployment-5321 deletion completed in 6.120198189s
• [SLOW TEST:11.314 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:04:54.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Apr 3 13:04:54.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6401'
Apr 3 13:04:54.398: INFO: stderr: ""
Apr 3 13:04:54.398: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 3 13:04:54.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6401'
Apr 3 13:04:54.512: INFO: stderr: ""
Apr 3 13:04:54.512: INFO: stdout: "update-demo-nautilus-dlxlm update-demo-nautilus-j6bhg "
Apr 3 13:04:54.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlxlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:04:54.609: INFO: stderr: ""
Apr 3 13:04:54.609: INFO: stdout: ""
Apr 3 13:04:54.609: INFO: update-demo-nautilus-dlxlm is created but not running
Apr 3 13:04:59.610: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6401'
Apr 3 13:04:59.707: INFO: stderr: ""
Apr 3 13:04:59.707: INFO: stdout: "update-demo-nautilus-dlxlm update-demo-nautilus-j6bhg "
Apr 3 13:04:59.707: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlxlm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:04:59.801: INFO: stderr: ""
Apr 3 13:04:59.801: INFO: stdout: "true"
Apr 3 13:04:59.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-dlxlm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:04:59.892: INFO: stderr: ""
Apr 3 13:04:59.892: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 3 13:04:59.893: INFO: validating pod update-demo-nautilus-dlxlm
Apr 3 13:04:59.897: INFO: got data: { "image": "nautilus.jpg" }
Apr 3 13:04:59.897: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 3 13:04:59.897: INFO: update-demo-nautilus-dlxlm is verified up and running
Apr 3 13:04:59.897: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6bhg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:04:59.992: INFO: stderr: ""
Apr 3 13:04:59.992: INFO: stdout: "true"
Apr 3 13:04:59.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6bhg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:00.085: INFO: stderr: ""
Apr 3 13:05:00.085: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 3 13:05:00.085: INFO: validating pod update-demo-nautilus-j6bhg
Apr 3 13:05:00.089: INFO: got data: { "image": "nautilus.jpg" }
Apr 3 13:05:00.089: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 3 13:05:00.089: INFO: update-demo-nautilus-j6bhg is verified up and running
STEP: scaling down the replication controller
Apr 3 13:05:00.091: INFO: scanned /root for discovery docs:
Apr 3 13:05:00.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6401'
Apr 3 13:05:01.229: INFO: stderr: ""
Apr 3 13:05:01.229: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 3 13:05:01.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6401'
Apr 3 13:05:01.318: INFO: stderr: ""
Apr 3 13:05:01.318: INFO: stdout: "update-demo-nautilus-dlxlm update-demo-nautilus-j6bhg "
STEP: Replicas for name=update-demo: expected=1 actual=2
Apr 3 13:05:06.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6401'
Apr 3 13:05:06.408: INFO: stderr: ""
Apr 3 13:05:06.408: INFO: stdout: "update-demo-nautilus-j6bhg "
Apr 3 13:05:06.408: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6bhg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:06.499: INFO: stderr: ""
Apr 3 13:05:06.499: INFO: stdout: "true"
Apr 3 13:05:06.499: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6bhg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:06.596: INFO: stderr: ""
Apr 3 13:05:06.596: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 3 13:05:06.596: INFO: validating pod update-demo-nautilus-j6bhg
Apr 3 13:05:06.599: INFO: got data: { "image": "nautilus.jpg" }
Apr 3 13:05:06.599: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 3 13:05:06.599: INFO: update-demo-nautilus-j6bhg is verified up and running
STEP: scaling up the replication controller
Apr 3 13:05:06.601: INFO: scanned /root for discovery docs:
Apr 3 13:05:06.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6401'
Apr 3 13:05:07.726: INFO: stderr: ""
Apr 3 13:05:07.726: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 3 13:05:07.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6401'
Apr 3 13:05:07.826: INFO: stderr: ""
Apr 3 13:05:07.826: INFO: stdout: "update-demo-nautilus-j6bhg update-demo-nautilus-tjvm7 "
Apr 3 13:05:07.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6bhg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:07.927: INFO: stderr: ""
Apr 3 13:05:07.927: INFO: stdout: "true"
Apr 3 13:05:07.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6bhg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:08.024: INFO: stderr: ""
Apr 3 13:05:08.024: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 3 13:05:08.024: INFO: validating pod update-demo-nautilus-j6bhg
Apr 3 13:05:08.027: INFO: got data: { "image": "nautilus.jpg" }
Apr 3 13:05:08.027: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 3 13:05:08.027: INFO: update-demo-nautilus-j6bhg is verified up and running
Apr 3 13:05:08.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjvm7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:08.164: INFO: stderr: ""
Apr 3 13:05:08.164: INFO: stdout: ""
Apr 3 13:05:08.164: INFO: update-demo-nautilus-tjvm7 is created but not running
Apr 3 13:05:13.164: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6401'
Apr 3 13:05:13.257: INFO: stderr: ""
Apr 3 13:05:13.257: INFO: stdout: "update-demo-nautilus-j6bhg update-demo-nautilus-tjvm7 "
Apr 3 13:05:13.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6bhg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:13.357: INFO: stderr: ""
Apr 3 13:05:13.357: INFO: stdout: "true"
Apr 3 13:05:13.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6bhg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:13.452: INFO: stderr: ""
Apr 3 13:05:13.453: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 3 13:05:13.453: INFO: validating pod update-demo-nautilus-j6bhg
Apr 3 13:05:13.456: INFO: got data: { "image": "nautilus.jpg" }
Apr 3 13:05:13.456: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 3 13:05:13.456: INFO: update-demo-nautilus-j6bhg is verified up and running
Apr 3 13:05:13.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjvm7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:13.549: INFO: stderr: ""
Apr 3 13:05:13.549: INFO: stdout: "true"
Apr 3 13:05:13.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tjvm7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6401'
Apr 3 13:05:13.642: INFO: stderr: ""
Apr 3 13:05:13.642: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 3 13:05:13.642: INFO: validating pod update-demo-nautilus-tjvm7
Apr 3 13:05:13.646: INFO: got data: { "image": "nautilus.jpg" }
Apr 3 13:05:13.646: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 3 13:05:13.646: INFO: update-demo-nautilus-tjvm7 is verified up and running
STEP: using delete to clean up resources
Apr 3 13:05:13.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6401'
Apr 3 13:05:13.741: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 13:05:13.742: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Apr 3 13:05:13.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6401'
Apr 3 13:05:13.847: INFO: stderr: "No resources found.\n"
Apr 3 13:05:13.847: INFO: stdout: ""
Apr 3 13:05:13.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6401 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 3 13:05:13.954: INFO: stderr: ""
Apr 3 13:05:13.954: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:05:13.954: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6401" for this suite.
Apr 3 13:05:35.993: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:05:36.084: INFO: namespace kubectl-6401 deletion completed in 22.100828626s
• [SLOW TEST:42.030 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:05:36.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 3 13:05:36.160: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ac3ef64-a788-4c00-aae4-3a7f5fb7e6cd" in namespace "projected-2902" to be "success or failure"
Apr 3 13:05:36.166: INFO: Pod "downwardapi-volume-6ac3ef64-a788-4c00-aae4-3a7f5fb7e6cd": Phase="Pending", Reason="", readiness=false. Elapsed: 5.408913ms
Apr 3 13:05:38.170: INFO: Pod "downwardapi-volume-6ac3ef64-a788-4c00-aae4-3a7f5fb7e6cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009728197s
Apr 3 13:05:40.174: INFO: Pod "downwardapi-volume-6ac3ef64-a788-4c00-aae4-3a7f5fb7e6cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01391453s
STEP: Saw pod success
Apr 3 13:05:40.174: INFO: Pod "downwardapi-volume-6ac3ef64-a788-4c00-aae4-3a7f5fb7e6cd" satisfied condition "success or failure"
Apr 3 13:05:40.177: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-6ac3ef64-a788-4c00-aae4-3a7f5fb7e6cd container client-container:
STEP: delete the pod
Apr 3 13:05:40.208: INFO: Waiting for pod downwardapi-volume-6ac3ef64-a788-4c00-aae4-3a7f5fb7e6cd to disappear
Apr 3 13:05:40.219: INFO: Pod downwardapi-volume-6ac3ef64-a788-4c00-aae4-3a7f5fb7e6cd no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:05:40.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2902" for this suite.
Apr 3 13:05:46.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:05:46.310: INFO: namespace projected-2902 deletion completed in 6.087174782s
• [SLOW TEST:10.226 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:05:46.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-5484
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 3 13:05:46.368: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 3 13:06:10.515: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.51:8080/dial?request=hostName&protocol=udp&host=10.244.1.242&port=8081&tries=1'] Namespace:pod-network-test-5484 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true
PreserveWhitespace:false}
Apr 3 13:06:10.515: INFO: >>> kubeConfig: /root/.kube/config
I0403 13:06:10.556878 6 log.go:172] (0xc000eeafd0) (0xc0012c4f00) Create stream
I0403 13:06:10.556918 6 log.go:172] (0xc000eeafd0) (0xc0012c4f00) Stream added, broadcasting: 1
I0403 13:06:10.564687 6 log.go:172] (0xc000eeafd0) Reply frame received for 1
I0403 13:06:10.564733 6 log.go:172] (0xc000eeafd0) (0xc0005748c0) Create stream
I0403 13:06:10.564747 6 log.go:172] (0xc000eeafd0) (0xc0005748c0) Stream added, broadcasting: 3
I0403 13:06:10.566054 6 log.go:172] (0xc000eeafd0) Reply frame received for 3
I0403 13:06:10.566092 6 log.go:172] (0xc000eeafd0) (0xc0012c50e0) Create stream
I0403 13:06:10.566107 6 log.go:172] (0xc000eeafd0) (0xc0012c50e0) Stream added, broadcasting: 5
I0403 13:06:10.566980 6 log.go:172] (0xc000eeafd0) Reply frame received for 5
I0403 13:06:10.657395 6 log.go:172] (0xc000eeafd0) Data frame received for 3
I0403 13:06:10.657490 6 log.go:172] (0xc0005748c0) (3) Data frame handling
I0403 13:06:10.657523 6 log.go:172] (0xc0005748c0) (3) Data frame sent
I0403 13:06:10.657766 6 log.go:172] (0xc000eeafd0) Data frame received for 3
I0403 13:06:10.657797 6 log.go:172] (0xc0005748c0) (3) Data frame handling
I0403 13:06:10.657914 6 log.go:172] (0xc000eeafd0) Data frame received for 5
I0403 13:06:10.657932 6 log.go:172] (0xc0012c50e0) (5) Data frame handling
I0403 13:06:10.659943 6 log.go:172] (0xc000eeafd0) Data frame received for 1
I0403 13:06:10.659981 6 log.go:172] (0xc0012c4f00) (1) Data frame handling
I0403 13:06:10.660009 6 log.go:172] (0xc0012c4f00) (1) Data frame sent
I0403 13:06:10.660034 6 log.go:172] (0xc000eeafd0) (0xc0012c4f00) Stream removed, broadcasting: 1
I0403 13:06:10.660053 6 log.go:172] (0xc000eeafd0) Go away received
I0403 13:06:10.660404 6 log.go:172] (0xc000eeafd0) (0xc0012c4f00) Stream removed, broadcasting: 1
I0403 13:06:10.660421 6 log.go:172] (0xc000eeafd0) (0xc0005748c0) Stream removed, broadcasting: 3
I0403 13:06:10.660429 6 log.go:172] (0xc000eeafd0) (0xc0012c50e0) Stream removed, broadcasting: 5
Apr 3 13:06:10.660: INFO: Waiting for endpoints: map[]
Apr 3 13:06:10.663: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.51:8080/dial?request=hostName&protocol=udp&host=10.244.2.50&port=8081&tries=1'] Namespace:pod-network-test-5484 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 3 13:06:10.663: INFO: >>> kubeConfig: /root/.kube/config
I0403 13:06:10.700625 6 log.go:172] (0xc001a6cfd0) (0xc001118fa0) Create stream
I0403 13:06:10.700652 6 log.go:172] (0xc001a6cfd0) (0xc001118fa0) Stream added, broadcasting: 1
I0403 13:06:10.703005 6 log.go:172] (0xc001a6cfd0) Reply frame received for 1
I0403 13:06:10.703047 6 log.go:172] (0xc001a6cfd0) (0xc000574f00) Create stream
I0403 13:06:10.703058 6 log.go:172] (0xc001a6cfd0) (0xc000574f00) Stream added, broadcasting: 3
I0403 13:06:10.704171 6 log.go:172] (0xc001a6cfd0) Reply frame received for 3
I0403 13:06:10.704223 6 log.go:172] (0xc001a6cfd0) (0xc0025fc8c0) Create stream
I0403 13:06:10.704239 6 log.go:172] (0xc001a6cfd0) (0xc0025fc8c0) Stream added, broadcasting: 5
I0403 13:06:10.705490 6 log.go:172] (0xc001a6cfd0) Reply frame received for 5
I0403 13:06:10.783395 6 log.go:172] (0xc001a6cfd0) Data frame received for 3
I0403 13:06:10.783419 6 log.go:172] (0xc000574f00) (3) Data frame handling
I0403 13:06:10.783446 6 log.go:172] (0xc000574f00) (3) Data frame sent
I0403 13:06:10.784095 6 log.go:172] (0xc001a6cfd0) Data frame received for 5
I0403 13:06:10.784124 6 log.go:172] (0xc0025fc8c0) (5) Data frame handling
I0403 13:06:10.784197 6 log.go:172] (0xc001a6cfd0) Data frame received for 3
I0403 13:06:10.784224 6 log.go:172] (0xc000574f00) (3) Data frame handling
I0403 13:06:10.785795 6 log.go:172] (0xc001a6cfd0) Data frame received for 1
I0403 13:06:10.785815 6 log.go:172] (0xc001118fa0) (1) Data frame handling
I0403 13:06:10.785821 6 log.go:172] (0xc001118fa0) (1) Data frame sent
I0403 13:06:10.785962 6 log.go:172] (0xc001a6cfd0) (0xc001118fa0) Stream removed, broadcasting: 1
I0403 13:06:10.786013 6 log.go:172] (0xc001a6cfd0) Go away received
I0403 13:06:10.786182 6 log.go:172] (0xc001a6cfd0) (0xc001118fa0) Stream removed, broadcasting: 1
I0403 13:06:10.786210 6 log.go:172] (0xc001a6cfd0) (0xc000574f00) Stream removed, broadcasting: 3
I0403 13:06:10.786226 6 log.go:172] (0xc001a6cfd0) (0xc0025fc8c0) Stream removed, broadcasting: 5
Apr 3 13:06:10.786: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:06:10.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5484" for this suite.
Apr 3 13:06:34.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:06:34.883: INFO: namespace pod-network-test-5484 deletion completed in 24.093466242s
• [SLOW TEST:48.572 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3
13:06:34.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-74vl
STEP: Creating a pod to test atomic-volume-subpath
Apr 3 13:06:34.955: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-74vl" in namespace "subpath-8254" to be "success or failure"
Apr 3 13:06:34.992: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Pending", Reason="", readiness=false. Elapsed: 36.384055ms
Apr 3 13:06:36.995: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039854419s
Apr 3 13:06:39.000: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 4.044493149s
Apr 3 13:06:41.005: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 6.049485481s
Apr 3 13:06:43.009: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 8.053802507s
Apr 3 13:06:45.014: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 10.058479617s
Apr 3 13:06:47.018: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 12.06290052s
Apr 3 13:06:49.022: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 14.067025419s
Apr 3 13:06:51.027: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 16.071396608s
Apr 3 13:06:53.031: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 18.075687643s
Apr 3 13:06:55.035: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 20.079624326s
Apr 3 13:06:57.039: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Running", Reason="", readiness=true. Elapsed: 22.084099858s
Apr 3 13:06:59.044: INFO: Pod "pod-subpath-test-configmap-74vl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.088593334s
STEP: Saw pod success
Apr 3 13:06:59.044: INFO: Pod "pod-subpath-test-configmap-74vl" satisfied condition "success or failure"
Apr 3 13:06:59.047: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-74vl container test-container-subpath-configmap-74vl:
STEP: delete the pod
Apr 3 13:06:59.101: INFO: Waiting for pod pod-subpath-test-configmap-74vl to disappear
Apr 3 13:06:59.110: INFO: Pod pod-subpath-test-configmap-74vl no longer exists
STEP: Deleting pod pod-subpath-test-configmap-74vl
Apr 3 13:06:59.110: INFO: Deleting pod "pod-subpath-test-configmap-74vl" in namespace "subpath-8254"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:06:59.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8254" for this suite.
Apr 3 13:07:05.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:07:05.217: INFO: namespace subpath-8254 deletion completed in 6.101092196s
• [SLOW TEST:30.334 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:07:05.218: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Apr 3 13:07:05.300: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:05.315: INFO: Number of nodes with available pods: 0 Apr 3 13:07:05.315: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:06.319: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:06.322: INFO: Number of nodes with available pods: 0 Apr 3 13:07:06.322: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:07.320: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:07.342: INFO: Number of nodes with available pods: 0 Apr 3 13:07:07.342: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:08.319: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:08.322: INFO: Number of nodes with available pods: 0 Apr 3 13:07:08.322: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:09.320: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:09.323: INFO: Number of nodes with available pods: 2 Apr 3 13:07:09.323: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
Apr 3 13:07:09.343: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:09.345: INFO: Number of nodes with available pods: 1 Apr 3 13:07:09.345: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:10.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:10.352: INFO: Number of nodes with available pods: 1 Apr 3 13:07:10.352: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:11.360: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:11.364: INFO: Number of nodes with available pods: 1 Apr 3 13:07:11.364: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:12.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:12.353: INFO: Number of nodes with available pods: 1 Apr 3 13:07:12.353: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:13.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:13.359: INFO: Number of nodes with available pods: 1 Apr 3 13:07:13.359: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:14.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:14.352: INFO: Number of nodes with available pods: 1 Apr 3 13:07:14.352: INFO: Node iruya-worker is 
running more than one daemon pod Apr 3 13:07:15.351: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:15.354: INFO: Number of nodes with available pods: 1 Apr 3 13:07:15.354: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:16.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:16.353: INFO: Number of nodes with available pods: 1 Apr 3 13:07:16.353: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:17.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:17.353: INFO: Number of nodes with available pods: 1 Apr 3 13:07:17.353: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:18.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:18.352: INFO: Number of nodes with available pods: 1 Apr 3 13:07:18.352: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:19.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:19.354: INFO: Number of nodes with available pods: 1 Apr 3 13:07:19.354: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:20.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:20.352: INFO: Number of nodes with available pods: 1 Apr 3 
13:07:20.352: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:21.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:21.354: INFO: Number of nodes with available pods: 1 Apr 3 13:07:21.354: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:22.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:22.354: INFO: Number of nodes with available pods: 1 Apr 3 13:07:22.354: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:23.350: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:23.353: INFO: Number of nodes with available pods: 1 Apr 3 13:07:23.353: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:24.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:24.351: INFO: Number of nodes with available pods: 1 Apr 3 13:07:24.351: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:07:25.352: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:07:25.356: INFO: Number of nodes with available pods: 2 Apr 3 13:07:25.356: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace 
daemonsets-2407, will wait for the garbage collector to delete the pods Apr 3 13:07:25.417: INFO: Deleting DaemonSet.extensions daemon-set took: 7.003454ms Apr 3 13:07:25.718: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.344863ms Apr 3 13:07:32.221: INFO: Number of nodes with available pods: 0 Apr 3 13:07:32.221: INFO: Number of running nodes: 0, number of available pods: 0 Apr 3 13:07:32.229: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2407/daemonsets","resourceVersion":"3391990"},"items":null} Apr 3 13:07:32.231: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2407/pods","resourceVersion":"3391990"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:07:32.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2407" for this suite. 
Apr 3 13:07:38.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:07:38.336: INFO: namespace daemonsets-2407 deletion completed in 6.095549351s • [SLOW TEST:33.119 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:07:38.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod Apr 3 13:07:42.403: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-509a1295-9841-4f9d-af27-cac656c3345a,GenerateName:,Namespace:events-4441,SelfLink:/api/v1/namespaces/events-4441/pods/send-events-509a1295-9841-4f9d-af27-cac656c3345a,UID:986c3e76-61fa-4845-ae78-ee2eee623db3,ResourceVersion:3392043,Generation:0,CreationTimestamp:2020-04-03 13:07:38 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 
374957424,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-s7lw4 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-s7lw4,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-s7lw4 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0028d2fb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0028d2fd0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:07:38 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:07:40 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:07:40 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:07:38 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.244,StartTime:2020-04-03 13:07:38 +0000 UTC,ContainerStatuses:[{p {nil 
ContainerStateRunning{StartedAt:2020-04-03 13:07:40 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://d2e67e18a33cc22b8c33665628a6114dfa648386783498e3cea24e89a3012446}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod Apr 3 13:07:44.408: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod Apr 3 13:07:46.412: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:07:46.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-4441" for this suite. Apr 3 13:08:24.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:08:24.561: INFO: namespace events-4441 deletion completed in 38.134255017s • [SLOW TEST:46.224 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:08:24.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a 
namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 3 13:08:24.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-5147' Apr 3 13:08:24.705: INFO: stderr: "" Apr 3 13:08:24.705: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Apr 3 13:08:24.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-5147' Apr 3 13:08:32.181: INFO: stderr: "" Apr 3 13:08:32.181: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:08:32.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5147" for this suite. 
Apr 3 13:08:38.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:08:38.311: INFO: namespace kubectl-5147 deletion completed in 6.099639944s • [SLOW TEST:13.750 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:08:38.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 3 13:08:38.364: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 3 13:08:38.372: INFO: Waiting for terminating namespaces to be deleted... 
Apr 3 13:08:38.375: INFO: Logging pods the kubelet thinks is on node iruya-worker before test Apr 3 13:08:38.379: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 3 13:08:38.379: INFO: Container kube-proxy ready: true, restart count 0 Apr 3 13:08:38.379: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) Apr 3 13:08:38.379: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 13:08:38.379: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test Apr 3 13:08:38.406: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) Apr 3 13:08:38.406: INFO: Container kindnet-cni ready: true, restart count 0 Apr 3 13:08:38.406: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) Apr 3 13:08:38.406: INFO: Container kube-proxy ready: true, restart count 0 Apr 3 13:08:38.406: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) Apr 3 13:08:38.406: INFO: Container coredns ready: true, restart count 0 Apr 3 13:08:38.406: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) Apr 3 13:08:38.406: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16025137b0f0b533], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:08:39.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-3778" for this suite. Apr 3 13:08:45.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:08:45.539: INFO: namespace sched-pred-3778 deletion completed in 6.109302874s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.227 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:08:45.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 3 13:08:45.574: INFO: Waiting up to 5m0s for pod "downward-api-6798f871-c033-431c-bd0d-3280738eafa0" in namespace "downward-api-5077" to be "success or 
failure" Apr 3 13:08:45.625: INFO: Pod "downward-api-6798f871-c033-431c-bd0d-3280738eafa0": Phase="Pending", Reason="", readiness=false. Elapsed: 51.647474ms Apr 3 13:08:47.679: INFO: Pod "downward-api-6798f871-c033-431c-bd0d-3280738eafa0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105726971s Apr 3 13:08:49.684: INFO: Pod "downward-api-6798f871-c033-431c-bd0d-3280738eafa0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.110119785s STEP: Saw pod success Apr 3 13:08:49.684: INFO: Pod "downward-api-6798f871-c033-431c-bd0d-3280738eafa0" satisfied condition "success or failure" Apr 3 13:08:49.687: INFO: Trying to get logs from node iruya-worker2 pod downward-api-6798f871-c033-431c-bd0d-3280738eafa0 container dapi-container: STEP: delete the pod Apr 3 13:08:49.710: INFO: Waiting for pod downward-api-6798f871-c033-431c-bd0d-3280738eafa0 to disappear Apr 3 13:08:49.714: INFO: Pod downward-api-6798f871-c033-431c-bd0d-3280738eafa0 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:08:49.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5077" for this suite. 
Apr 3 13:08:55.745: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:08:55.847: INFO: namespace downward-api-5077 deletion completed in 6.129215342s • [SLOW TEST:10.308 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:08:55.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:08:55.930: INFO: Create a RollingUpdate DaemonSet Apr 3 13:08:55.934: INFO: Check that daemon pods launch on every node of the cluster Apr 3 13:08:55.943: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:08:55.972: INFO: Number of nodes with available pods: 0 Apr 3 13:08:55.972: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:08:56.977: INFO: DaemonSet pods can't tolerate 
node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:08:56.980: INFO: Number of nodes with available pods: 0 Apr 3 13:08:56.980: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:08:58.007: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:08:58.010: INFO: Number of nodes with available pods: 0 Apr 3 13:08:58.010: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:08:58.981: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:08:58.984: INFO: Number of nodes with available pods: 1 Apr 3 13:08:58.984: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:08:59.977: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:08:59.981: INFO: Number of nodes with available pods: 2 Apr 3 13:08:59.981: INFO: Number of running nodes: 2, number of available pods: 2 Apr 3 13:08:59.981: INFO: Update the DaemonSet to trigger a rollout Apr 3 13:08:59.988: INFO: Updating DaemonSet daemon-set Apr 3 13:09:13.017: INFO: Roll back the DaemonSet before rollout is complete Apr 3 13:09:13.023: INFO: Updating DaemonSet daemon-set Apr 3 13:09:13.023: INFO: Make sure DaemonSet rollback is complete Apr 3 13:09:13.028: INFO: Wrong image for pod: daemon-set-4ppcd. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. 
Apr 3 13:09:13.028: INFO: Pod daemon-set-4ppcd is not available Apr 3 13:09:13.035: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:09:14.039: INFO: Wrong image for pod: daemon-set-4ppcd. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 3 13:09:14.039: INFO: Pod daemon-set-4ppcd is not available Apr 3 13:09:14.042: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:09:15.042: INFO: Wrong image for pod: daemon-set-4ppcd. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. Apr 3 13:09:15.042: INFO: Pod daemon-set-4ppcd is not available Apr 3 13:09:15.046: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:09:16.040: INFO: Pod daemon-set-885dx is not available Apr 3 13:09:16.043: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6086, will wait for the garbage collector to delete the pods Apr 3 13:09:16.108: INFO: Deleting DaemonSet.extensions daemon-set took: 5.673744ms Apr 3 13:09:16.408: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.334777ms Apr 3 13:09:21.912: INFO: Number of nodes with available pods: 0 Apr 3 13:09:21.912: INFO: Number of running nodes: 0, number of available pods: 0 Apr 3 13:09:21.915: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6086/daemonsets","resourceVersion":"3392391"},"items":null} Apr 3 13:09:21.917: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6086/pods","resourceVersion":"3392391"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:09:21.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6086" for this suite. Apr 3 13:09:27.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:09:28.046: INFO: namespace daemonsets-6086 deletion completed in 6.117257061s • [SLOW TEST:32.199 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:09:28.047: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false Apr 3 13:09:38.162: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.162: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:38.199501 6 log.go:172] (0xc002d234a0) (0xc001d470e0) Create stream I0403 13:09:38.199535 6 log.go:172] (0xc002d234a0) (0xc001d470e0) Stream added, broadcasting: 1 I0403 13:09:38.202069 6 log.go:172] (0xc002d234a0) Reply frame received for 1 I0403 13:09:38.202118 6 log.go:172] (0xc002d234a0) (0xc001d47180) Create stream I0403 13:09:38.202134 6 log.go:172] (0xc002d234a0) (0xc001d47180) Stream added, broadcasting: 3 I0403 13:09:38.203170 6 log.go:172] (0xc002d234a0) Reply frame received for 3 I0403 13:09:38.203204 6 log.go:172] (0xc002d234a0) (0xc001d47220) Create stream I0403 13:09:38.203215 6 log.go:172] (0xc002d234a0) (0xc001d47220) Stream added, broadcasting: 5 I0403 13:09:38.204335 6 log.go:172] (0xc002d234a0) Reply frame received for 5 I0403 13:09:38.257576 6 log.go:172] (0xc002d234a0) Data frame received for 3 I0403 13:09:38.257611 6 log.go:172] (0xc001d47180) (3) Data frame handling I0403 13:09:38.257629 6 log.go:172] (0xc001d47180) (3) Data frame sent I0403 13:09:38.257639 6 log.go:172] (0xc002d234a0) Data frame received for 3 I0403 13:09:38.257655 6 log.go:172] (0xc001d47180) (3) Data frame handling I0403 13:09:38.258920 6 log.go:172] (0xc002d234a0) Data frame received for 5 I0403 13:09:38.258975 6 log.go:172] (0xc001d47220) (5) Data frame handling I0403 13:09:38.265597 6 log.go:172] (0xc002d234a0) Data frame received for 1 I0403 13:09:38.265619 6 log.go:172] 
(0xc001d470e0) (1) Data frame handling I0403 13:09:38.265630 6 log.go:172] (0xc001d470e0) (1) Data frame sent I0403 13:09:38.265966 6 log.go:172] (0xc002d234a0) (0xc001d470e0) Stream removed, broadcasting: 1 I0403 13:09:38.266046 6 log.go:172] (0xc002d234a0) (0xc001d470e0) Stream removed, broadcasting: 1 I0403 13:09:38.266056 6 log.go:172] (0xc002d234a0) (0xc001d47180) Stream removed, broadcasting: 3 I0403 13:09:38.266061 6 log.go:172] (0xc002d234a0) (0xc001d47220) Stream removed, broadcasting: 5 Apr 3 13:09:38.266: INFO: Exec stderr: "" Apr 3 13:09:38.266: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.266: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:38.267473 6 log.go:172] (0xc002d234a0) Go away received I0403 13:09:38.287803 6 log.go:172] (0xc000a15550) (0xc00121d4a0) Create stream I0403 13:09:38.287821 6 log.go:172] (0xc000a15550) (0xc00121d4a0) Stream added, broadcasting: 1 I0403 13:09:38.289810 6 log.go:172] (0xc000a15550) Reply frame received for 1 I0403 13:09:38.289857 6 log.go:172] (0xc000a15550) (0xc00121d680) Create stream I0403 13:09:38.289869 6 log.go:172] (0xc000a15550) (0xc00121d680) Stream added, broadcasting: 3 I0403 13:09:38.290790 6 log.go:172] (0xc000a15550) Reply frame received for 3 I0403 13:09:38.290825 6 log.go:172] (0xc000a15550) (0xc001119040) Create stream I0403 13:09:38.290836 6 log.go:172] (0xc000a15550) (0xc001119040) Stream added, broadcasting: 5 I0403 13:09:38.291627 6 log.go:172] (0xc000a15550) Reply frame received for 5 I0403 13:09:38.348266 6 log.go:172] (0xc000a15550) Data frame received for 5 I0403 13:09:38.348345 6 log.go:172] (0xc001119040) (5) Data frame handling I0403 13:09:38.348422 6 log.go:172] (0xc000a15550) Data frame received for 3 I0403 13:09:38.348477 6 log.go:172] (0xc00121d680) (3) Data frame handling I0403 13:09:38.348499 6 
log.go:172] (0xc00121d680) (3) Data frame sent I0403 13:09:38.348512 6 log.go:172] (0xc000a15550) Data frame received for 3 I0403 13:09:38.348523 6 log.go:172] (0xc00121d680) (3) Data frame handling I0403 13:09:38.350090 6 log.go:172] (0xc000a15550) Data frame received for 1 I0403 13:09:38.350116 6 log.go:172] (0xc00121d4a0) (1) Data frame handling I0403 13:09:38.350128 6 log.go:172] (0xc00121d4a0) (1) Data frame sent I0403 13:09:38.350151 6 log.go:172] (0xc000a15550) (0xc00121d4a0) Stream removed, broadcasting: 1 I0403 13:09:38.350172 6 log.go:172] (0xc000a15550) Go away received I0403 13:09:38.350230 6 log.go:172] (0xc000a15550) (0xc00121d4a0) Stream removed, broadcasting: 1 I0403 13:09:38.350245 6 log.go:172] (0xc000a15550) (0xc00121d680) Stream removed, broadcasting: 3 I0403 13:09:38.350254 6 log.go:172] (0xc000a15550) (0xc001119040) Stream removed, broadcasting: 5 Apr 3 13:09:38.350: INFO: Exec stderr: "" Apr 3 13:09:38.350: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.350: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:38.396173 6 log.go:172] (0xc002b70c60) (0xc0012568c0) Create stream I0403 13:09:38.396213 6 log.go:172] (0xc002b70c60) (0xc0012568c0) Stream added, broadcasting: 1 I0403 13:09:38.398486 6 log.go:172] (0xc002b70c60) Reply frame received for 1 I0403 13:09:38.398539 6 log.go:172] (0xc002b70c60) (0xc001256a00) Create stream I0403 13:09:38.398553 6 log.go:172] (0xc002b70c60) (0xc001256a00) Stream added, broadcasting: 3 I0403 13:09:38.399379 6 log.go:172] (0xc002b70c60) Reply frame received for 3 I0403 13:09:38.399411 6 log.go:172] (0xc002b70c60) (0xc001d472c0) Create stream I0403 13:09:38.399421 6 log.go:172] (0xc002b70c60) (0xc001d472c0) Stream added, broadcasting: 5 I0403 13:09:38.400078 6 log.go:172] (0xc002b70c60) Reply frame received for 5 I0403 13:09:38.454081 6 log.go:172] 
(0xc002b70c60) Data frame received for 5 I0403 13:09:38.454139 6 log.go:172] (0xc001d472c0) (5) Data frame handling I0403 13:09:38.454183 6 log.go:172] (0xc002b70c60) Data frame received for 3 I0403 13:09:38.454205 6 log.go:172] (0xc001256a00) (3) Data frame handling I0403 13:09:38.454236 6 log.go:172] (0xc001256a00) (3) Data frame sent I0403 13:09:38.454334 6 log.go:172] (0xc002b70c60) Data frame received for 3 I0403 13:09:38.454370 6 log.go:172] (0xc001256a00) (3) Data frame handling I0403 13:09:38.456002 6 log.go:172] (0xc002b70c60) Data frame received for 1 I0403 13:09:38.456027 6 log.go:172] (0xc0012568c0) (1) Data frame handling I0403 13:09:38.456052 6 log.go:172] (0xc0012568c0) (1) Data frame sent I0403 13:09:38.456068 6 log.go:172] (0xc002b70c60) (0xc0012568c0) Stream removed, broadcasting: 1 I0403 13:09:38.456096 6 log.go:172] (0xc002b70c60) Go away received I0403 13:09:38.456255 6 log.go:172] (0xc002b70c60) (0xc0012568c0) Stream removed, broadcasting: 1 I0403 13:09:38.456288 6 log.go:172] (0xc002b70c60) (0xc001256a00) Stream removed, broadcasting: 3 I0403 13:09:38.456308 6 log.go:172] (0xc002b70c60) (0xc001d472c0) Stream removed, broadcasting: 5 Apr 3 13:09:38.456: INFO: Exec stderr: "" Apr 3 13:09:38.456: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.456: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:38.491598 6 log.go:172] (0xc002e68630) (0xc001d47720) Create stream I0403 13:09:38.491638 6 log.go:172] (0xc002e68630) (0xc001d47720) Stream added, broadcasting: 1 I0403 13:09:38.494134 6 log.go:172] (0xc002e68630) Reply frame received for 1 I0403 13:09:38.494182 6 log.go:172] (0xc002e68630) (0xc001256be0) Create stream I0403 13:09:38.494196 6 log.go:172] (0xc002e68630) (0xc001256be0) Stream added, broadcasting: 3 I0403 13:09:38.495211 6 log.go:172] (0xc002e68630) Reply frame 
received for 3 I0403 13:09:38.495260 6 log.go:172] (0xc002e68630) (0xc001119220) Create stream I0403 13:09:38.495276 6 log.go:172] (0xc002e68630) (0xc001119220) Stream added, broadcasting: 5 I0403 13:09:38.496147 6 log.go:172] (0xc002e68630) Reply frame received for 5 I0403 13:09:38.560750 6 log.go:172] (0xc002e68630) Data frame received for 3 I0403 13:09:38.560788 6 log.go:172] (0xc001256be0) (3) Data frame handling I0403 13:09:38.560807 6 log.go:172] (0xc001256be0) (3) Data frame sent I0403 13:09:38.560819 6 log.go:172] (0xc002e68630) Data frame received for 3 I0403 13:09:38.560829 6 log.go:172] (0xc001256be0) (3) Data frame handling I0403 13:09:38.561426 6 log.go:172] (0xc002e68630) Data frame received for 5 I0403 13:09:38.561460 6 log.go:172] (0xc001119220) (5) Data frame handling I0403 13:09:38.562834 6 log.go:172] (0xc002e68630) Data frame received for 1 I0403 13:09:38.562868 6 log.go:172] (0xc001d47720) (1) Data frame handling I0403 13:09:38.562891 6 log.go:172] (0xc001d47720) (1) Data frame sent I0403 13:09:38.562932 6 log.go:172] (0xc002e68630) (0xc001d47720) Stream removed, broadcasting: 1 I0403 13:09:38.562965 6 log.go:172] (0xc002e68630) Go away received I0403 13:09:38.563051 6 log.go:172] (0xc002e68630) (0xc001d47720) Stream removed, broadcasting: 1 I0403 13:09:38.563081 6 log.go:172] (0xc002e68630) (0xc001256be0) Stream removed, broadcasting: 3 I0403 13:09:38.563103 6 log.go:172] (0xc002e68630) (0xc001119220) Stream removed, broadcasting: 5 Apr 3 13:09:38.563: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount Apr 3 13:09:38.563: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.563: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:38.600132 6 log.go:172] (0xc000c61e40) (0xc001119860) Create stream I0403 
13:09:38.600162 6 log.go:172] (0xc000c61e40) (0xc001119860) Stream added, broadcasting: 1 I0403 13:09:38.602505 6 log.go:172] (0xc000c61e40) Reply frame received for 1 I0403 13:09:38.602541 6 log.go:172] (0xc000c61e40) (0xc00121d7c0) Create stream I0403 13:09:38.602560 6 log.go:172] (0xc000c61e40) (0xc00121d7c0) Stream added, broadcasting: 3 I0403 13:09:38.603684 6 log.go:172] (0xc000c61e40) Reply frame received for 3 I0403 13:09:38.603722 6 log.go:172] (0xc000c61e40) (0xc00121d860) Create stream I0403 13:09:38.603733 6 log.go:172] (0xc000c61e40) (0xc00121d860) Stream added, broadcasting: 5 I0403 13:09:38.604815 6 log.go:172] (0xc000c61e40) Reply frame received for 5 I0403 13:09:38.660573 6 log.go:172] (0xc000c61e40) Data frame received for 3 I0403 13:09:38.660629 6 log.go:172] (0xc00121d7c0) (3) Data frame handling I0403 13:09:38.660643 6 log.go:172] (0xc00121d7c0) (3) Data frame sent I0403 13:09:38.660658 6 log.go:172] (0xc000c61e40) Data frame received for 3 I0403 13:09:38.660670 6 log.go:172] (0xc00121d7c0) (3) Data frame handling I0403 13:09:38.660708 6 log.go:172] (0xc000c61e40) Data frame received for 5 I0403 13:09:38.660750 6 log.go:172] (0xc00121d860) (5) Data frame handling I0403 13:09:38.662091 6 log.go:172] (0xc000c61e40) Data frame received for 1 I0403 13:09:38.662123 6 log.go:172] (0xc001119860) (1) Data frame handling I0403 13:09:38.662156 6 log.go:172] (0xc001119860) (1) Data frame sent I0403 13:09:38.662312 6 log.go:172] (0xc000c61e40) (0xc001119860) Stream removed, broadcasting: 1 I0403 13:09:38.662342 6 log.go:172] (0xc000c61e40) Go away received I0403 13:09:38.662522 6 log.go:172] (0xc000c61e40) (0xc001119860) Stream removed, broadcasting: 1 I0403 13:09:38.662541 6 log.go:172] (0xc000c61e40) (0xc00121d7c0) Stream removed, broadcasting: 3 I0403 13:09:38.662550 6 log.go:172] (0xc000c61e40) (0xc00121d860) Stream removed, broadcasting: 5 Apr 3 13:09:38.662: INFO: Exec stderr: "" Apr 3 13:09:38.662: INFO: ExecWithOptions {Command:[cat 
/etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.662: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:38.697847 6 log.go:172] (0xc002eecb00) (0xc001119f40) Create stream I0403 13:09:38.697890 6 log.go:172] (0xc002eecb00) (0xc001119f40) Stream added, broadcasting: 1 I0403 13:09:38.700604 6 log.go:172] (0xc002eecb00) Reply frame received for 1 I0403 13:09:38.700658 6 log.go:172] (0xc002eecb00) (0xc0024b0aa0) Create stream I0403 13:09:38.700675 6 log.go:172] (0xc002eecb00) (0xc0024b0aa0) Stream added, broadcasting: 3 I0403 13:09:38.701983 6 log.go:172] (0xc002eecb00) Reply frame received for 3 I0403 13:09:38.702052 6 log.go:172] (0xc002eecb00) (0xc001256c80) Create stream I0403 13:09:38.702083 6 log.go:172] (0xc002eecb00) (0xc001256c80) Stream added, broadcasting: 5 I0403 13:09:38.703250 6 log.go:172] (0xc002eecb00) Reply frame received for 5 I0403 13:09:38.755583 6 log.go:172] (0xc002eecb00) Data frame received for 5 I0403 13:09:38.755621 6 log.go:172] (0xc001256c80) (5) Data frame handling I0403 13:09:38.755673 6 log.go:172] (0xc002eecb00) Data frame received for 3 I0403 13:09:38.755712 6 log.go:172] (0xc0024b0aa0) (3) Data frame handling I0403 13:09:38.755750 6 log.go:172] (0xc0024b0aa0) (3) Data frame sent I0403 13:09:38.755778 6 log.go:172] (0xc002eecb00) Data frame received for 3 I0403 13:09:38.755799 6 log.go:172] (0xc0024b0aa0) (3) Data frame handling I0403 13:09:38.756802 6 log.go:172] (0xc002eecb00) Data frame received for 1 I0403 13:09:38.756831 6 log.go:172] (0xc001119f40) (1) Data frame handling I0403 13:09:38.756839 6 log.go:172] (0xc001119f40) (1) Data frame sent I0403 13:09:38.756856 6 log.go:172] (0xc002eecb00) (0xc001119f40) Stream removed, broadcasting: 1 I0403 13:09:38.756865 6 log.go:172] (0xc002eecb00) Go away received I0403 13:09:38.757001 6 log.go:172] (0xc002eecb00) (0xc001119f40) Stream removed, 
broadcasting: 1 I0403 13:09:38.757034 6 log.go:172] (0xc002eecb00) (0xc0024b0aa0) Stream removed, broadcasting: 3 I0403 13:09:38.757055 6 log.go:172] (0xc002eecb00) (0xc001256c80) Stream removed, broadcasting: 5 Apr 3 13:09:38.757: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true Apr 3 13:09:38.757: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.757: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:38.790061 6 log.go:172] (0xc00328c6e0) (0xc00121dd60) Create stream I0403 13:09:38.790092 6 log.go:172] (0xc00328c6e0) (0xc00121dd60) Stream added, broadcasting: 1 I0403 13:09:38.798827 6 log.go:172] (0xc00328c6e0) Reply frame received for 1 I0403 13:09:38.798863 6 log.go:172] (0xc00328c6e0) (0xc001256000) Create stream I0403 13:09:38.798873 6 log.go:172] (0xc00328c6e0) (0xc001256000) Stream added, broadcasting: 3 I0403 13:09:38.799788 6 log.go:172] (0xc00328c6e0) Reply frame received for 3 I0403 13:09:38.799827 6 log.go:172] (0xc00328c6e0) (0xc0024b0000) Create stream I0403 13:09:38.799838 6 log.go:172] (0xc00328c6e0) (0xc0024b0000) Stream added, broadcasting: 5 I0403 13:09:38.800832 6 log.go:172] (0xc00328c6e0) Reply frame received for 5 I0403 13:09:38.873581 6 log.go:172] (0xc00328c6e0) Data frame received for 5 I0403 13:09:38.873620 6 log.go:172] (0xc0024b0000) (5) Data frame handling I0403 13:09:38.873656 6 log.go:172] (0xc00328c6e0) Data frame received for 3 I0403 13:09:38.873671 6 log.go:172] (0xc001256000) (3) Data frame handling I0403 13:09:38.873686 6 log.go:172] (0xc001256000) (3) Data frame sent I0403 13:09:38.873700 6 log.go:172] (0xc00328c6e0) Data frame received for 3 I0403 13:09:38.873709 6 log.go:172] (0xc001256000) (3) Data frame handling I0403 13:09:38.875260 6 log.go:172] (0xc00328c6e0) Data frame 
received for 1 I0403 13:09:38.875292 6 log.go:172] (0xc00121dd60) (1) Data frame handling I0403 13:09:38.875307 6 log.go:172] (0xc00121dd60) (1) Data frame sent I0403 13:09:38.875334 6 log.go:172] (0xc00328c6e0) (0xc00121dd60) Stream removed, broadcasting: 1 I0403 13:09:38.875440 6 log.go:172] (0xc00328c6e0) Go away received I0403 13:09:38.875489 6 log.go:172] (0xc00328c6e0) (0xc00121dd60) Stream removed, broadcasting: 1 I0403 13:09:38.875529 6 log.go:172] (0xc00328c6e0) (0xc001256000) Stream removed, broadcasting: 3 I0403 13:09:38.875547 6 log.go:172] (0xc00328c6e0) (0xc0024b0000) Stream removed, broadcasting: 5 Apr 3 13:09:38.875: INFO: Exec stderr: "" Apr 3 13:09:38.875: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.875: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:38.911002 6 log.go:172] (0xc002b388f0) (0xc0012563c0) Create stream I0403 13:09:38.911029 6 log.go:172] (0xc002b388f0) (0xc0012563c0) Stream added, broadcasting: 1 I0403 13:09:38.913460 6 log.go:172] (0xc002b388f0) Reply frame received for 1 I0403 13:09:38.913503 6 log.go:172] (0xc002b388f0) (0xc0005741e0) Create stream I0403 13:09:38.913517 6 log.go:172] (0xc002b388f0) (0xc0005741e0) Stream added, broadcasting: 3 I0403 13:09:38.914550 6 log.go:172] (0xc002b388f0) Reply frame received for 3 I0403 13:09:38.914610 6 log.go:172] (0xc002b388f0) (0xc0024b0140) Create stream I0403 13:09:38.914645 6 log.go:172] (0xc002b388f0) (0xc0024b0140) Stream added, broadcasting: 5 I0403 13:09:38.915616 6 log.go:172] (0xc002b388f0) Reply frame received for 5 I0403 13:09:38.992972 6 log.go:172] (0xc002b388f0) Data frame received for 3 I0403 13:09:38.992998 6 log.go:172] (0xc0005741e0) (3) Data frame handling I0403 13:09:38.993051 6 log.go:172] (0xc0005741e0) (3) Data frame sent I0403 13:09:38.993065 6 log.go:172] (0xc002b388f0) 
Data frame received for 5 I0403 13:09:38.993074 6 log.go:172] (0xc0024b0140) (5) Data frame handling I0403 13:09:38.993568 6 log.go:172] (0xc002b388f0) Data frame received for 3 I0403 13:09:38.993594 6 log.go:172] (0xc0005741e0) (3) Data frame handling I0403 13:09:38.994862 6 log.go:172] (0xc002b388f0) Data frame received for 1 I0403 13:09:38.994897 6 log.go:172] (0xc0012563c0) (1) Data frame handling I0403 13:09:38.994938 6 log.go:172] (0xc0012563c0) (1) Data frame sent I0403 13:09:38.994967 6 log.go:172] (0xc002b388f0) (0xc0012563c0) Stream removed, broadcasting: 1 I0403 13:09:38.995120 6 log.go:172] (0xc002b388f0) (0xc0012563c0) Stream removed, broadcasting: 1 I0403 13:09:38.995150 6 log.go:172] (0xc002b388f0) (0xc0005741e0) Stream removed, broadcasting: 3 I0403 13:09:38.995186 6 log.go:172] (0xc002b388f0) Go away received I0403 13:09:38.995349 6 log.go:172] (0xc002b388f0) (0xc0024b0140) Stream removed, broadcasting: 5 Apr 3 13:09:38.995: INFO: Exec stderr: "" Apr 3 13:09:38.995: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:38.995: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:39.024588 6 log.go:172] (0xc002b39600) (0xc001256a00) Create stream I0403 13:09:39.024606 6 log.go:172] (0xc002b39600) (0xc001256a00) Stream added, broadcasting: 1 I0403 13:09:39.027268 6 log.go:172] (0xc002b39600) Reply frame received for 1 I0403 13:09:39.027304 6 log.go:172] (0xc002b39600) (0xc0024b01e0) Create stream I0403 13:09:39.027317 6 log.go:172] (0xc002b39600) (0xc0024b01e0) Stream added, broadcasting: 3 I0403 13:09:39.028283 6 log.go:172] (0xc002b39600) Reply frame received for 3 I0403 13:09:39.028302 6 log.go:172] (0xc002b39600) (0xc0021121e0) Create stream I0403 13:09:39.028313 6 log.go:172] (0xc002b39600) (0xc0021121e0) Stream added, broadcasting: 5 I0403 13:09:39.029434 6 log.go:172] 
(0xc002b39600) Reply frame received for 5 I0403 13:09:39.090614 6 log.go:172] (0xc002b39600) Data frame received for 3 I0403 13:09:39.090650 6 log.go:172] (0xc0024b01e0) (3) Data frame handling I0403 13:09:39.090666 6 log.go:172] (0xc0024b01e0) (3) Data frame sent I0403 13:09:39.090674 6 log.go:172] (0xc002b39600) Data frame received for 3 I0403 13:09:39.090688 6 log.go:172] (0xc0024b01e0) (3) Data frame handling I0403 13:09:39.090702 6 log.go:172] (0xc002b39600) Data frame received for 5 I0403 13:09:39.090712 6 log.go:172] (0xc0021121e0) (5) Data frame handling I0403 13:09:39.092462 6 log.go:172] (0xc002b39600) Data frame received for 1 I0403 13:09:39.092507 6 log.go:172] (0xc001256a00) (1) Data frame handling I0403 13:09:39.092547 6 log.go:172] (0xc001256a00) (1) Data frame sent I0403 13:09:39.092583 6 log.go:172] (0xc002b39600) (0xc001256a00) Stream removed, broadcasting: 1 I0403 13:09:39.092648 6 log.go:172] (0xc002b39600) Go away received I0403 13:09:39.092708 6 log.go:172] (0xc002b39600) (0xc001256a00) Stream removed, broadcasting: 1 I0403 13:09:39.092729 6 log.go:172] (0xc002b39600) (0xc0024b01e0) Stream removed, broadcasting: 3 I0403 13:09:39.092744 6 log.go:172] (0xc002b39600) (0xc0021121e0) Stream removed, broadcasting: 5 Apr 3 13:09:39.092: INFO: Exec stderr: "" Apr 3 13:09:39.092: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8131 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:09:39.092: INFO: >>> kubeConfig: /root/.kube/config I0403 13:09:39.125616 6 log.go:172] (0xc00328cdc0) (0xc0024b05a0) Create stream I0403 13:09:39.125639 6 log.go:172] (0xc00328cdc0) (0xc0024b05a0) Stream added, broadcasting: 1 I0403 13:09:39.127971 6 log.go:172] (0xc00328cdc0) Reply frame received for 1 I0403 13:09:39.128017 6 log.go:172] (0xc00328cdc0) (0xc001256be0) Create stream I0403 13:09:39.128031 6 log.go:172] (0xc00328cdc0) (0xc001256be0) 
Stream added, broadcasting: 3 I0403 13:09:39.128939 6 log.go:172] (0xc00328cdc0) Reply frame received for 3 I0403 13:09:39.128994 6 log.go:172] (0xc00328cdc0) (0xc001256c80) Create stream I0403 13:09:39.129019 6 log.go:172] (0xc00328cdc0) (0xc001256c80) Stream added, broadcasting: 5 I0403 13:09:39.130155 6 log.go:172] (0xc00328cdc0) Reply frame received for 5 I0403 13:09:39.213444 6 log.go:172] (0xc00328cdc0) Data frame received for 5 I0403 13:09:39.213485 6 log.go:172] (0xc001256c80) (5) Data frame handling I0403 13:09:39.213614 6 log.go:172] (0xc00328cdc0) Data frame received for 3 I0403 13:09:39.213634 6 log.go:172] (0xc001256be0) (3) Data frame handling I0403 13:09:39.213648 6 log.go:172] (0xc001256be0) (3) Data frame sent I0403 13:09:39.213662 6 log.go:172] (0xc00328cdc0) Data frame received for 3 I0403 13:09:39.213675 6 log.go:172] (0xc001256be0) (3) Data frame handling I0403 13:09:39.213719 6 log.go:172] (0xc00328cdc0) Data frame received for 1 I0403 13:09:39.213734 6 log.go:172] (0xc0024b05a0) (1) Data frame handling I0403 13:09:39.213746 6 log.go:172] (0xc0024b05a0) (1) Data frame sent I0403 13:09:39.213766 6 log.go:172] (0xc00328cdc0) (0xc0024b05a0) Stream removed, broadcasting: 1 I0403 13:09:39.213869 6 log.go:172] (0xc00328cdc0) (0xc0024b05a0) Stream removed, broadcasting: 1 I0403 13:09:39.213888 6 log.go:172] (0xc00328cdc0) (0xc001256be0) Stream removed, broadcasting: 3 I0403 13:09:39.214062 6 log.go:172] (0xc00328cdc0) (0xc001256c80) Stream removed, broadcasting: 5 I0403 13:09:39.214211 6 log.go:172] (0xc00328cdc0) Go away received Apr 3 13:09:39.214: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:09:39.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-8131" for this suite. 
Apr 3 13:10:25.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:10:25.334: INFO: namespace e2e-kubelet-etc-hosts-8131 deletion completed in 46.116555764s • [SLOW TEST:57.287 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:10:25.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode Apr 3 13:10:25.396: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-4397" to be "success or failure" Apr 3 13:10:25.438: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 42.288168ms Apr 3 13:10:27.441: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.045668251s Apr 3 13:10:29.444: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048549484s STEP: Saw pod success Apr 3 13:10:29.444: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Apr 3 13:10:29.447: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod Apr 3 13:10:29.475: INFO: Waiting for pod pod-host-path-test to disappear Apr 3 13:10:29.684: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:10:29.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-4397" for this suite. Apr 3 13:10:35.712: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:10:35.807: INFO: namespace hostpath-4397 deletion completed in 6.119155356s • [SLOW TEST:10.472 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:10:35.807: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should 
proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 3 13:10:35.899: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 5.639921ms)
Apr 3 13:10:35.918: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 19.471871ms)
Apr 3 13:10:35.922: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.371033ms)
Apr 3 13:10:35.926: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 4.011642ms)
Apr 3 13:10:35.929: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.784335ms)
Apr 3 13:10:35.933: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.645782ms)
Apr 3 13:10:35.939: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 6.119424ms)
Apr 3 13:10:35.942: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.185591ms)
Apr 3 13:10:35.945: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.460837ms)
Apr 3 13:10:35.948: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.896774ms)
Apr 3 13:10:35.950: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.446676ms)
Apr 3 13:10:35.953: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.608449ms)
Apr 3 13:10:35.955: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.374005ms)
Apr 3 13:10:35.958: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.792127ms)
Apr 3 13:10:35.961: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.116068ms)
Apr 3 13:10:35.964: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.974454ms)
Apr 3 13:10:35.967: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 2.877671ms)
Apr 3 13:10:35.970: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.144145ms)
Apr 3 13:10:35.974: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.241913ms)
Apr 3 13:10:35.977: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/: containers/ pods/ (200; 3.120828ms)
[AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:10:35.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-9300" for this suite. Apr 3 13:10:41.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:10:42.083: INFO: namespace proxy-9300 deletion completed in 6.102540553s • [SLOW TEST:6.276 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:10:42.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating pod Apr 3 13:10:46.213: INFO: Pod pod-hostip-f76c8dea-7c3d-45c4-8693-9a7658864c04 has hostIP: 172.17.0.5 [AfterEach] [k8s.io] Pods 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:10:46.213: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-302" for this suite. Apr 3 13:11:08.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:11:08.313: INFO: namespace pods-302 deletion completed in 22.095053735s • [SLOW TEST:26.229 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:11:08.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-bdfeb294-3e7e-4ac0-ae2b-dd0d4824a07f STEP: Creating a pod to test consume secrets Apr 3 13:11:08.486: INFO: Waiting up to 5m0s for pod "pod-secrets-ba7e976f-d86a-4c9e-96f0-26b105d4281d" in namespace "secrets-7773" to be "success or failure" Apr 3 13:11:08.490: INFO: Pod 
"pod-secrets-ba7e976f-d86a-4c9e-96f0-26b105d4281d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118007ms Apr 3 13:11:10.523: INFO: Pod "pod-secrets-ba7e976f-d86a-4c9e-96f0-26b105d4281d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036458496s Apr 3 13:11:12.527: INFO: Pod "pod-secrets-ba7e976f-d86a-4c9e-96f0-26b105d4281d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040614603s STEP: Saw pod success Apr 3 13:11:12.527: INFO: Pod "pod-secrets-ba7e976f-d86a-4c9e-96f0-26b105d4281d" satisfied condition "success or failure" Apr 3 13:11:12.530: INFO: Trying to get logs from node iruya-worker pod pod-secrets-ba7e976f-d86a-4c9e-96f0-26b105d4281d container secret-volume-test: STEP: delete the pod Apr 3 13:11:12.573: INFO: Waiting for pod pod-secrets-ba7e976f-d86a-4c9e-96f0-26b105d4281d to disappear Apr 3 13:11:12.592: INFO: Pod pod-secrets-ba7e976f-d86a-4c9e-96f0-26b105d4281d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:11:12.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7773" for this suite. Apr 3 13:11:18.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:11:18.696: INFO: namespace secrets-7773 deletion completed in 6.099912373s STEP: Destroying namespace "secret-namespace-9127" for this suite. 
Apr 3 13:11:24.706: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:11:24.837: INFO: namespace secret-namespace-9127 deletion completed in 6.140611713s • [SLOW TEST:16.524 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:11:24.837: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-6b88ddfe-c5d4-4f5f-8f54-8df47c4604c5 STEP: Creating configMap with name cm-test-opt-upd-b8f8920c-bdf0-4827-ac13-591495833638 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6b88ddfe-c5d4-4f5f-8f54-8df47c4604c5 STEP: Updating configmap cm-test-opt-upd-b8f8920c-bdf0-4827-ac13-591495833638 STEP: Creating configMap with name cm-test-opt-create-40478eb5-1476-40d9-bf92-b21f364617f1 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:12:55.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8310" for this suite. Apr 3 13:13:17.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:13:17.500: INFO: namespace configmap-8310 deletion completed in 22.088340465s • [SLOW TEST:112.663 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:13:17.500: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC Apr 3 13:13:17.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7090' Apr 3 13:13:20.348: INFO: stderr: "" Apr 3 13:13:20.348: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis 
master to start. Apr 3 13:13:21.399: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:13:21.399: INFO: Found 0 / 1 Apr 3 13:13:22.353: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:13:22.353: INFO: Found 0 / 1 Apr 3 13:13:23.352: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:13:23.352: INFO: Found 0 / 1 Apr 3 13:13:24.353: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:13:24.353: INFO: Found 1 / 1 Apr 3 13:13:24.353: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods Apr 3 13:13:24.357: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:13:24.357: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 3 13:13:24.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-7td2d --namespace=kubectl-7090 -p {"metadata":{"annotations":{"x":"y"}}}' Apr 3 13:13:24.460: INFO: stderr: "" Apr 3 13:13:24.460: INFO: stdout: "pod/redis-master-7td2d patched\n" STEP: checking annotations Apr 3 13:13:24.463: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:13:24.463: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:13:24.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7090" for this suite. 
Apr 3 13:13:46.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:13:46.558: INFO: namespace kubectl-7090 deletion completed in 22.09169696s • [SLOW TEST:29.058 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:13:46.558: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod liveness-dea24cbd-7cc5-4414-bc9c-5edd8bb848d4 in namespace container-probe-5868 Apr 3 13:13:50.623: INFO: Started pod liveness-dea24cbd-7cc5-4414-bc9c-5edd8bb848d4 in namespace container-probe-5868 STEP: checking the pod's current state and verifying that restartCount is present Apr 3 13:13:50.626: INFO: Initial restart count of pod 
liveness-dea24cbd-7cc5-4414-bc9c-5edd8bb848d4 is 0 Apr 3 13:14:10.670: INFO: Restart count of pod container-probe-5868/liveness-dea24cbd-7cc5-4414-bc9c-5edd8bb848d4 is now 1 (20.044542366s elapsed) Apr 3 13:14:30.711: INFO: Restart count of pod container-probe-5868/liveness-dea24cbd-7cc5-4414-bc9c-5edd8bb848d4 is now 2 (40.084838485s elapsed) Apr 3 13:14:50.751: INFO: Restart count of pod container-probe-5868/liveness-dea24cbd-7cc5-4414-bc9c-5edd8bb848d4 is now 3 (1m0.125280038s elapsed) Apr 3 13:15:10.793: INFO: Restart count of pod container-probe-5868/liveness-dea24cbd-7cc5-4414-bc9c-5edd8bb848d4 is now 4 (1m20.167448925s elapsed) Apr 3 13:16:20.953: INFO: Restart count of pod container-probe-5868/liveness-dea24cbd-7cc5-4414-bc9c-5edd8bb848d4 is now 5 (2m30.327429777s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:16:20.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5868" for this suite. 
Apr 3 13:16:26.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:16:27.061: INFO: namespace container-probe-5868 deletion completed in 6.091758634s • [SLOW TEST:160.503 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:16:27.062: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Apr 3 13:16:31.669: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9402 pod-service-account-bb7b5a9c-4f90-49a4-b525-9af2320eaf2a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Apr 3 13:16:31.894: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-9402 pod-service-account-bb7b5a9c-4f90-49a4-b525-9af2320eaf2a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Apr 3 13:16:32.105: INFO: Running '/usr/local/bin/kubectl exec 
--namespace=svcaccounts-9402 pod-service-account-bb7b5a9c-4f90-49a4-b525-9af2320eaf2a -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:16:32.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-9402" for this suite. Apr 3 13:16:38.327: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:16:38.410: INFO: namespace svcaccounts-9402 deletion completed in 6.110788258s • [SLOW TEST:11.348 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:16:38.411: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 3 13:16:38.469: INFO: Waiting up to 5m0s for pod "pod-7ad8fa11-f5cf-4c6c-a261-59f5f7abda35" in namespace "emptydir-4736" to be "success 
or failure" Apr 3 13:16:38.479: INFO: Pod "pod-7ad8fa11-f5cf-4c6c-a261-59f5f7abda35": Phase="Pending", Reason="", readiness=false. Elapsed: 9.486735ms Apr 3 13:16:40.485: INFO: Pod "pod-7ad8fa11-f5cf-4c6c-a261-59f5f7abda35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015800877s Apr 3 13:16:42.489: INFO: Pod "pod-7ad8fa11-f5cf-4c6c-a261-59f5f7abda35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020083857s STEP: Saw pod success Apr 3 13:16:42.489: INFO: Pod "pod-7ad8fa11-f5cf-4c6c-a261-59f5f7abda35" satisfied condition "success or failure" Apr 3 13:16:42.492: INFO: Trying to get logs from node iruya-worker2 pod pod-7ad8fa11-f5cf-4c6c-a261-59f5f7abda35 container test-container: STEP: delete the pod Apr 3 13:16:42.516: INFO: Waiting for pod pod-7ad8fa11-f5cf-4c6c-a261-59f5f7abda35 to disappear Apr 3 13:16:42.527: INFO: Pod pod-7ad8fa11-f5cf-4c6c-a261-59f5f7abda35 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:16:42.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4736" for this suite. 
Apr 3 13:16:48.555: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:16:48.642: INFO: namespace emptydir-4736 deletion completed in 6.110905932s • [SLOW TEST:10.231 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:16:48.642: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 13:16:48.683: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a5bda80-5039-498f-a80b-46feeb76de20" in namespace "projected-2374" to be "success or failure" Apr 3 13:16:48.698: INFO: Pod "downwardapi-volume-8a5bda80-5039-498f-a80b-46feeb76de20": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.13157ms Apr 3 13:16:50.703: INFO: Pod "downwardapi-volume-8a5bda80-5039-498f-a80b-46feeb76de20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019810074s Apr 3 13:16:52.707: INFO: Pod "downwardapi-volume-8a5bda80-5039-498f-a80b-46feeb76de20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023888706s STEP: Saw pod success Apr 3 13:16:52.707: INFO: Pod "downwardapi-volume-8a5bda80-5039-498f-a80b-46feeb76de20" satisfied condition "success or failure" Apr 3 13:16:52.710: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8a5bda80-5039-498f-a80b-46feeb76de20 container client-container: STEP: delete the pod Apr 3 13:16:52.781: INFO: Waiting for pod downwardapi-volume-8a5bda80-5039-498f-a80b-46feeb76de20 to disappear Apr 3 13:16:52.789: INFO: Pod downwardapi-volume-8a5bda80-5039-498f-a80b-46feeb76de20 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:16:52.789: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2374" for this suite. 
Apr 3 13:16:58.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:16:58.896: INFO: namespace projected-2374 deletion completed in 6.103889834s • [SLOW TEST:10.253 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:16:58.896: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command Apr 3 13:16:58.984: INFO: Waiting up to 5m0s for pod "var-expansion-70b0a8f0-ae21-43b9-97fe-2e3c6e268e17" in namespace "var-expansion-8734" to be "success or failure" Apr 3 13:16:58.993: INFO: Pod "var-expansion-70b0a8f0-ae21-43b9-97fe-2e3c6e268e17": Phase="Pending", Reason="", readiness=false. Elapsed: 8.451244ms Apr 3 13:17:01.012: INFO: Pod "var-expansion-70b0a8f0-ae21-43b9-97fe-2e3c6e268e17": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028305902s Apr 3 13:17:03.016: INFO: Pod "var-expansion-70b0a8f0-ae21-43b9-97fe-2e3c6e268e17": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032254679s STEP: Saw pod success Apr 3 13:17:03.016: INFO: Pod "var-expansion-70b0a8f0-ae21-43b9-97fe-2e3c6e268e17" satisfied condition "success or failure" Apr 3 13:17:03.019: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-70b0a8f0-ae21-43b9-97fe-2e3c6e268e17 container dapi-container: STEP: delete the pod Apr 3 13:17:03.036: INFO: Waiting for pod var-expansion-70b0a8f0-ae21-43b9-97fe-2e3c6e268e17 to disappear Apr 3 13:17:03.040: INFO: Pod var-expansion-70b0a8f0-ae21-43b9-97fe-2e3c6e268e17 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:17:03.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8734" for this suite. Apr 3 13:17:09.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:17:09.160: INFO: namespace var-expansion-8734 deletion completed in 6.11645119s • [SLOW TEST:10.264 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 
3 13:17:09.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:17:09.256: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. Apr 3 13:17:09.264: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:09.282: INFO: Number of nodes with available pods: 0 Apr 3 13:17:09.282: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:17:10.286: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:10.289: INFO: Number of nodes with available pods: 0 Apr 3 13:17:10.289: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:17:11.336: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:11.340: INFO: Number of nodes with available pods: 0 Apr 3 13:17:11.340: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:17:12.287: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:12.291: INFO: Number of nodes with available pods: 0 Apr 3 13:17:12.291: INFO: Node iruya-worker is running more than one 
daemon pod Apr 3 13:17:13.287: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:13.291: INFO: Number of nodes with available pods: 2 Apr 3 13:17:13.291: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. Apr 3 13:17:13.318: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:13.318: INFO: Wrong image for pod: daemon-set-hhp7n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:13.338: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:14.342: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:14.342: INFO: Wrong image for pod: daemon-set-hhp7n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:14.345: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:15.344: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:15.344: INFO: Wrong image for pod: daemon-set-hhp7n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 3 13:17:15.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:16.342: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:16.342: INFO: Wrong image for pod: daemon-set-hhp7n. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:16.342: INFO: Pod daemon-set-hhp7n is not available Apr 3 13:17:16.345: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:17.343: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:17.343: INFO: Pod daemon-set-vtglq is not available Apr 3 13:17:17.347: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:18.366: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:18.367: INFO: Pod daemon-set-vtglq is not available Apr 3 13:17:18.376: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:19.343: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. 
Apr 3 13:17:19.343: INFO: Pod daemon-set-vtglq is not available Apr 3 13:17:19.354: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:20.343: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:20.347: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:21.343: INFO: Wrong image for pod: daemon-set-4zp2v. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine. Apr 3 13:17:21.343: INFO: Pod daemon-set-4zp2v is not available Apr 3 13:17:21.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:17:22.342: INFO: Pod daemon-set-5vdfp is not available Apr 3 13:17:22.345: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
Apr 3 13:17:22.349: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 13:17:22.351: INFO: Number of nodes with available pods: 1
Apr 3 13:17:22.351: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 3 13:17:23.374: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 13:17:23.376: INFO: Number of nodes with available pods: 1
Apr 3 13:17:23.376: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 3 13:17:24.356: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 13:17:24.360: INFO: Number of nodes with available pods: 1
Apr 3 13:17:24.360: INFO: Node iruya-worker2 is running more than one daemon pod
Apr 3 13:17:25.356: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
Apr 3 13:17:25.360: INFO: Number of nodes with available pods: 2
Apr 3 13:17:25.360: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5431, will wait for the garbage collector to delete the pods
Apr 3 13:17:25.432: INFO: Deleting DaemonSet.extensions daemon-set took: 6.070513ms
Apr 3 13:17:25.732: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.258256ms
Apr 3 13:17:32.264: INFO: Number of nodes with available pods: 0
Apr 3 13:17:32.264: INFO: Number of running nodes: 0, number of available pods: 0
Apr 3 13:17:32.267: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5431/daemonsets","resourceVersion":"3393834"},"items":null}
Apr 3 13:17:32.269: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5431/pods","resourceVersion":"3393834"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:17:32.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5431" for this suite.
Apr 3 13:17:38.300: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:17:38.381: INFO: namespace daemonsets-5431 deletion completed in 6.100057126s
• [SLOW TEST:29.220 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should adopt matching pods on creation [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:17:38.382: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:17:43.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-7661" for this suite.
Apr 3 13:18:05.514: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:18:05.579: INFO: namespace replication-controller-7661 deletion completed in 22.079219609s
• [SLOW TEST:27.197 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:18:05.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-eba16115-2825-47b0-b9ba-bdc2e4a450a9
STEP: Creating a pod to test consume secrets
Apr 3 13:18:05.684: INFO: Waiting up to 5m0s for pod "pod-secrets-752ece82-75c8-4f9c-80cc-67d10ab88fd0" in namespace "secrets-8370" to be "success or failure"
Apr 3 13:18:05.687: INFO: Pod "pod-secrets-752ece82-75c8-4f9c-80cc-67d10ab88fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.690047ms
Apr 3 13:18:07.691: INFO: Pod "pod-secrets-752ece82-75c8-4f9c-80cc-67d10ab88fd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006685291s
Apr 3 13:18:09.695: INFO: Pod "pod-secrets-752ece82-75c8-4f9c-80cc-67d10ab88fd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011112972s
STEP: Saw pod success
Apr 3 13:18:09.695: INFO: Pod "pod-secrets-752ece82-75c8-4f9c-80cc-67d10ab88fd0" satisfied condition "success or failure"
Apr 3 13:18:09.699: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-752ece82-75c8-4f9c-80cc-67d10ab88fd0 container secret-volume-test:
STEP: delete the pod
Apr 3 13:18:09.720: INFO: Waiting for pod pod-secrets-752ece82-75c8-4f9c-80cc-67d10ab88fd0 to disappear
Apr 3 13:18:09.724: INFO: Pod pod-secrets-752ece82-75c8-4f9c-80cc-67d10ab88fd0 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:18:09.724: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8370" for this suite.
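The secret-volume spec above ("mappings and Item Mode set") exercises a Secret mounted with per-key `items` mappings and an explicit file mode. A manifest along these lines reproduces the same setup; this is a sketch, not the suite's actual fixture, and all names, keys, and the image are illustrative:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: secret-test-map        # illustrative; the suite uses a generated name
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "ls -l /etc/secret-volume && cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:                   # the "mappings": key -> path
      - key: data-1
        path: new-path-data-1
        mode: 0400             # the "Item Mode set" part
```

The test then asserts that the mounted file carries the remapped path and the 0400 mode, which is why the pod only needs to run to completion ("success or failure").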
Apr 3 13:18:15.740: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:18:15.820: INFO: namespace secrets-8370 deletion completed in 6.091591349s
• [SLOW TEST:10.240 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:18:15.822: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 3 13:18:15.879: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:18:16.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-621" for this suite.
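The CRD spec above only creates and deletes a CustomResourceDefinition object. In the apiextensions.k8s.io/v1beta1 API that this v1.15-era cluster serves, a minimal CRD of the kind the test registers could look like this; the group and names are illustrative, not the suite's generated values:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1   # v1beta1 matches Kubernetes 1.15
kind: CustomResourceDefinition
metadata:
  # must be <plural>.<group>
  name: foos.example.mycompany.com
spec:
  group: example.mycompany.com
  version: v1
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
```

Creating this object and then deleting it (and seeing the served `/apis/example.mycompany.com/v1/...` endpoint appear and disappear) is essentially what "creating/deleting custom resource definition objects works" verifies.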
Apr 3 13:18:22.999: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:18:23.083: INFO: namespace custom-resource-definition-621 deletion completed in 6.099962969s
• [SLOW TEST:7.262 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:18:23.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:19:23.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7165" for this suite.
Apr 3 13:19:45.165: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:19:45.262: INFO: namespace container-probe-7165 deletion completed in 22.111252258s
• [SLOW TEST:82.177 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:19:45.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 3 13:19:45.371: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2a46c3ae-6051-4b2c-b5b9-1dfcf2582116" in namespace "downward-api-2935" to be "success or failure"
Apr 3 13:19:45.375: INFO: Pod "downwardapi-volume-2a46c3ae-6051-4b2c-b5b9-1dfcf2582116": Phase="Pending", Reason="", readiness=false. Elapsed: 3.246727ms
Apr 3 13:19:47.378: INFO: Pod "downwardapi-volume-2a46c3ae-6051-4b2c-b5b9-1dfcf2582116": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00666593s
Apr 3 13:19:49.382: INFO: Pod "downwardapi-volume-2a46c3ae-6051-4b2c-b5b9-1dfcf2582116": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010485455s
STEP: Saw pod success
Apr 3 13:19:49.382: INFO: Pod "downwardapi-volume-2a46c3ae-6051-4b2c-b5b9-1dfcf2582116" satisfied condition "success or failure"
Apr 3 13:19:49.385: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2a46c3ae-6051-4b2c-b5b9-1dfcf2582116 container client-container:
STEP: delete the pod
Apr 3 13:19:49.400: INFO: Waiting for pod downwardapi-volume-2a46c3ae-6051-4b2c-b5b9-1dfcf2582116 to disappear
Apr 3 13:19:49.418: INFO: Pod downwardapi-volume-2a46c3ae-6051-4b2c-b5b9-1dfcf2582116 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:19:49.418: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2935" for this suite.
Apr 3 13:19:55.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:19:55.522: INFO: namespace downward-api-2935 deletion completed in 6.100385191s
• [SLOW TEST:10.260 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:19:55.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Apr 3 13:19:55.566: INFO: Waiting up to 5m0s for pod "client-containers-99cbe21a-d99e-4eba-8d8b-903965eb4975" in namespace "containers-5493" to be "success or failure"
Apr 3 13:19:55.583: INFO: Pod "client-containers-99cbe21a-d99e-4eba-8d8b-903965eb4975": Phase="Pending", Reason="", readiness=false. Elapsed: 16.323413ms
Apr 3 13:19:57.613: INFO: Pod "client-containers-99cbe21a-d99e-4eba-8d8b-903965eb4975": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047121392s
Apr 3 13:19:59.618: INFO: Pod "client-containers-99cbe21a-d99e-4eba-8d8b-903965eb4975": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051418522s
STEP: Saw pod success
Apr 3 13:19:59.618: INFO: Pod "client-containers-99cbe21a-d99e-4eba-8d8b-903965eb4975" satisfied condition "success or failure"
Apr 3 13:19:59.621: INFO: Trying to get logs from node iruya-worker pod client-containers-99cbe21a-d99e-4eba-8d8b-903965eb4975 container test-container:
STEP: delete the pod
Apr 3 13:19:59.677: INFO: Waiting for pod client-containers-99cbe21a-d99e-4eba-8d8b-903965eb4975 to disappear
Apr 3 13:19:59.714: INFO: Pod client-containers-99cbe21a-d99e-4eba-8d8b-903965eb4975 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:19:59.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5493" for this suite.
Apr 3 13:20:05.730: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:20:05.807: INFO: namespace containers-5493 deletion completed in 6.088319506s
• [SLOW TEST:10.284 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:20:05.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 3 13:20:05.869: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Apr 3 13:20:07.921: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:20:07.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2337" for this suite.
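The quota spec above pits a ReplicationController against a two-pod ResourceQuota so that the RC surfaces a `ReplicaFailure` condition, which clears once it is scaled back under the quota. The pair of objects could be sketched as follows; the replica count and container image are illustrative assumptions:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                  # "allows only two pods to run"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                  # asks for more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx
```

With this in place, `kubectl get rc condition-test -o jsonpath='{.status.conditions}'` would show the failure condition while the third pod is forbidden, and scaling to `replicas: 2` removes it, mirroring the two "Checking rc ..." steps in the log.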
Apr 3 13:20:13.975: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:20:14.053: INFO: namespace replication-controller-2337 deletion completed in 6.098411064s
• [SLOW TEST:8.246 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Services should serve multiport endpoints from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:20:14.053: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-6492
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6492 to expose endpoints map[]
Apr 3 13:20:14.277: INFO: Get endpoints failed (11.629813ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Apr 3 13:20:15.281: INFO: successfully validated that service multi-endpoint-test in namespace services-6492 exposes endpoints map[] (1.014842743s elapsed)
STEP: Creating pod pod1 in namespace services-6492
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6492 to expose endpoints map[pod1:[100]]
Apr 3 13:20:18.390: INFO: successfully validated that service multi-endpoint-test in namespace services-6492 exposes endpoints map[pod1:[100]] (3.102438013s elapsed)
STEP: Creating pod pod2 in namespace services-6492
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6492 to expose endpoints map[pod1:[100] pod2:[101]]
Apr 3 13:20:21.449: INFO: successfully validated that service multi-endpoint-test in namespace services-6492 exposes endpoints map[pod1:[100] pod2:[101]] (3.055360418s elapsed)
STEP: Deleting pod pod1 in namespace services-6492
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6492 to expose endpoints map[pod2:[101]]
Apr 3 13:20:22.471: INFO: successfully validated that service multi-endpoint-test in namespace services-6492 exposes endpoints map[pod2:[101]] (1.01671544s elapsed)
STEP: Deleting pod pod2 in namespace services-6492
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-6492 to expose endpoints map[]
Apr 3 13:20:23.485: INFO: successfully validated that service multi-endpoint-test in namespace services-6492 exposes endpoints map[] (1.00919675s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:20:23.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6492" for this suite.
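The endpoint maps in the log (`map[pod1:[100] pod2:[101]]`) show each pod backing a different target port of one Service. A multiport Service consistent with those observations might look like this; the service port numbers and label key are assumptions, while the target ports 100 and 101 come straight from the logged endpoint maps:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test   # both pod1 and pod2 carry this label
  ports:
  - name: portname1
    port: 80
    targetPort: 100            # pod1 listens here -> endpoints map[pod1:[100]]
  - name: portname2
    port: 81
    targetPort: 101            # pod2 listens here -> endpoints map[pod2:[101]]
```

As pods matching the selector come and go, the endpoints controller adds and removes their IP:port pairs, which is exactly the sequence of `exposes endpoints map[...]` validations above.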
Apr 3 13:20:45.659: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:20:45.735: INFO: namespace services-6492 deletion completed in 22.122115752s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:31.682 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:20:45.736: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-712aa104-8521-4a5f-ad79-14c7c788a1f5
STEP: Creating a pod to test consume configMaps
Apr 3 13:20:45.797: INFO: Waiting up to 5m0s for pod "pod-configmaps-4723c372-9c6a-4be9-8ce8-d7654cfd441e" in namespace "configmap-5297" to be "success or failure"
Apr 3 13:20:45.842: INFO: Pod "pod-configmaps-4723c372-9c6a-4be9-8ce8-d7654cfd441e": Phase="Pending", Reason="", readiness=false. Elapsed: 44.979798ms
Apr 3 13:20:47.992: INFO: Pod "pod-configmaps-4723c372-9c6a-4be9-8ce8-d7654cfd441e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.194741885s
Apr 3 13:20:49.996: INFO: Pod "pod-configmaps-4723c372-9c6a-4be9-8ce8-d7654cfd441e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.199232074s
STEP: Saw pod success
Apr 3 13:20:49.996: INFO: Pod "pod-configmaps-4723c372-9c6a-4be9-8ce8-d7654cfd441e" satisfied condition "success or failure"
Apr 3 13:20:50.000: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-4723c372-9c6a-4be9-8ce8-d7654cfd441e container configmap-volume-test:
STEP: delete the pod
Apr 3 13:20:50.019: INFO: Waiting for pod pod-configmaps-4723c372-9c6a-4be9-8ce8-d7654cfd441e to disappear
Apr 3 13:20:50.039: INFO: Pod pod-configmaps-4723c372-9c6a-4be9-8ce8-d7654cfd441e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:20:50.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5297" for this suite.
Apr 3 13:20:56.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:20:56.150: INFO: namespace configmap-5297 deletion completed in 6.107509258s
• [SLOW TEST:10.414 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:20:56.150: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Apr 3 13:20:56.844: INFO: Pod name wrapped-volume-race-af022fbe-f743-49f6-b84e-952be78edb2b: Found 0 pods out of 5
Apr 3 13:21:01.853: INFO: Pod name wrapped-volume-race-af022fbe-f743-49f6-b84e-952be78edb2b: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-af022fbe-f743-49f6-b84e-952be78edb2b in namespace emptydir-wrapper-82, will wait for the garbage collector to delete the pods
Apr 3 13:21:15.936: INFO: Deleting ReplicationController wrapped-volume-race-af022fbe-f743-49f6-b84e-952be78edb2b took: 6.887366ms
Apr 3 13:21:16.236: INFO: Terminating ReplicationController wrapped-volume-race-af022fbe-f743-49f6-b84e-952be78edb2b pods took: 300.2231ms
STEP: Creating RC which spawns configmap-volume pods
Apr 3 13:21:53.184: INFO: Pod name wrapped-volume-race-abf23045-4f71-445d-b03a-0d6f4db87696: Found 0 pods out of 5
Apr 3 13:21:58.191: INFO: Pod name wrapped-volume-race-abf23045-4f71-445d-b03a-0d6f4db87696: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-abf23045-4f71-445d-b03a-0d6f4db87696 in namespace emptydir-wrapper-82, will wait for the garbage collector to delete the pods
Apr 3 13:22:12.275: INFO: Deleting ReplicationController wrapped-volume-race-abf23045-4f71-445d-b03a-0d6f4db87696 took: 6.395715ms
Apr 3 13:22:12.576: INFO: Terminating ReplicationController wrapped-volume-race-abf23045-4f71-445d-b03a-0d6f4db87696 pods took: 300.286139ms
STEP: Creating RC which spawns configmap-volume pods
Apr 3 13:22:53.214: INFO: Pod name wrapped-volume-race-5f7667af-7638-4e08-a540-0287500548a0: Found 0 pods out of 5
Apr 3 13:22:58.220: INFO: Pod name wrapped-volume-race-5f7667af-7638-4e08-a540-0287500548a0: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-5f7667af-7638-4e08-a540-0287500548a0 in namespace emptydir-wrapper-82, will wait for the garbage collector to delete the pods
Apr 3 13:23:12.333: INFO: Deleting ReplicationController wrapped-volume-race-5f7667af-7638-4e08-a540-0287500548a0 took: 17.524036ms
Apr 3 13:23:12.633: INFO: Terminating ReplicationController wrapped-volume-race-5f7667af-7638-4e08-a540-0287500548a0 pods took: 300.275743ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:23:53.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-82" for this suite.
Apr 3 13:24:01.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:24:01.356: INFO: namespace emptydir-wrapper-82 deletion completed in 8.096216949s
• [SLOW TEST:185.206 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:24:01.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-eaa3f6b6-d16c-46df-8ab4-32ce25eed0cf
STEP: Creating a pod to test consume configMaps
Apr 3 13:24:01.436: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e2662dc0-f536-44ee-80df-2355820bd208" in namespace "projected-2082" to be "success or failure"
Apr 3 13:24:01.440: INFO: Pod "pod-projected-configmaps-e2662dc0-f536-44ee-80df-2355820bd208": Phase="Pending", Reason="", readiness=false. Elapsed: 3.879366ms
Apr 3 13:24:03.444: INFO: Pod "pod-projected-configmaps-e2662dc0-f536-44ee-80df-2355820bd208": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00781732s
Apr 3 13:24:05.449: INFO: Pod "pod-projected-configmaps-e2662dc0-f536-44ee-80df-2355820bd208": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012067497s
STEP: Saw pod success
Apr 3 13:24:05.449: INFO: Pod "pod-projected-configmaps-e2662dc0-f536-44ee-80df-2355820bd208" satisfied condition "success or failure"
Apr 3 13:24:05.452: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-e2662dc0-f536-44ee-80df-2355820bd208 container projected-configmap-volume-test:
STEP: delete the pod
Apr 3 13:24:05.486: INFO: Waiting for pod pod-projected-configmaps-e2662dc0-f536-44ee-80df-2355820bd208 to disappear
Apr 3 13:24:05.498: INFO: Pod pod-projected-configmaps-e2662dc0-f536-44ee-80df-2355820bd208 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:24:05.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2082" for this suite.
Apr 3 13:24:11.520: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:24:11.602: INFO: namespace projected-2082 deletion completed in 6.100279551s
• [SLOW TEST:10.246 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:24:11.602: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Apr 3 13:24:11.691: INFO: Waiting up to 5m0s for pod "pod-2be436f1-f66a-4ea7-9766-610d7e5c7443" in namespace "emptydir-3839" to be "success or failure"
Apr 3 13:24:11.696: INFO: Pod "pod-2be436f1-f66a-4ea7-9766-610d7e5c7443": Phase="Pending", Reason="", readiness=false. Elapsed: 4.987694ms
Apr 3 13:24:13.738: INFO: Pod "pod-2be436f1-f66a-4ea7-9766-610d7e5c7443": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046930798s
Apr 3 13:24:15.742: INFO: Pod "pod-2be436f1-f66a-4ea7-9766-610d7e5c7443": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051224906s
STEP: Saw pod success
Apr 3 13:24:15.742: INFO: Pod "pod-2be436f1-f66a-4ea7-9766-610d7e5c7443" satisfied condition "success or failure"
Apr 3 13:24:15.745: INFO: Trying to get logs from node iruya-worker2 pod pod-2be436f1-f66a-4ea7-9766-610d7e5c7443 container test-container:
STEP: delete the pod
Apr 3 13:24:15.786: INFO: Waiting for pod pod-2be436f1-f66a-4ea7-9766-610d7e5c7443 to disappear
Apr 3 13:24:15.792: INFO: Pod pod-2be436f1-f66a-4ea7-9766-610d7e5c7443 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:24:15.792: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3839" for this suite.
Apr 3 13:24:21.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:24:21.897: INFO: namespace emptydir-3839 deletion completed in 6.101850499s
• [SLOW TEST:10.296 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:24:21.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-3168
[It] Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-3168
STEP: Creating statefulset with conflicting port in namespace statefulset-3168
STEP: Waiting until pod test-pod will start running in namespace statefulset-3168
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-3168
Apr 3 13:24:26.003: INFO: Observed stateful pod in namespace: statefulset-3168, name: ss-0, uid: 35af7fec-3c30-429a-866d-6411afec20d0, status phase: Pending. Waiting for statefulset controller to delete.
Apr 3 13:24:32.149: INFO: Observed stateful pod in namespace: statefulset-3168, name: ss-0, uid: 35af7fec-3c30-429a-866d-6411afec20d0, status phase: Failed. Waiting for statefulset controller to delete.
Apr 3 13:24:32.167: INFO: Observed stateful pod in namespace: statefulset-3168, name: ss-0, uid: 35af7fec-3c30-429a-866d-6411afec20d0, status phase: Failed. Waiting for statefulset controller to delete.
Apr 3 13:24:32.193: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3168
STEP: Removing pod with conflicting port in namespace statefulset-3168
STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-3168 and will be in running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 3 13:24:36.282: INFO: Deleting all statefulset in ns statefulset-3168
Apr 3 13:24:36.286: INFO: Scaling statefulset ss to 0
Apr 3 13:24:46.303: INFO: Waiting for statefulset status.replicas updated to 0
Apr 3 13:24:46.306: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:24:46.321: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3168" for this suite.
Apr 3 13:24:52.336: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:24:52.414: INFO: namespace statefulset-3168 deletion completed in 6.088686143s
• [SLOW TEST:30.516 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Should recreate evicted statefulset [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:24:52.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Apr 3 13:25:00.550: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:00.566: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:02.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:02.570: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:04.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:04.569: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:06.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:06.570: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:08.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:08.584: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:10.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:10.570: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:12.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:12.570: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:14.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:14.570: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:16.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:16.571: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:18.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:18.571: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:20.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:20.571: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:22.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:22.571: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:24.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:24.571: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:26.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:26.571: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:28.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:28.571: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:30.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:30.571: INFO: Pod pod-with-prestop-exec-hook still exists
Apr 3 13:25:32.567: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Apr 3 13:25:32.571: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:25:32.576: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-328" for this suite.
Apr 3 13:25:54.590: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:25:54.665: INFO: namespace container-lifecycle-hook-328 deletion completed in 22.086601131s
• [SLOW TEST:62.251 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:25:54.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Apr 3 13:25:54.718: INFO: Waiting up to 5m0s for pod "pod-51b37fa5-dbf8-4375-97ad-27618279d214" in namespace "emptydir-6469" to be "success or failure"
Apr 3 13:25:54.728: INFO: Pod "pod-51b37fa5-dbf8-4375-97ad-27618279d214": Phase="Pending", Reason="", readiness=false. Elapsed: 9.750696ms
Apr 3 13:25:56.767: INFO: Pod "pod-51b37fa5-dbf8-4375-97ad-27618279d214": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048758173s
Apr 3 13:25:58.772: INFO: Pod "pod-51b37fa5-dbf8-4375-97ad-27618279d214": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.053307847s
STEP: Saw pod success
Apr 3 13:25:58.772: INFO: Pod "pod-51b37fa5-dbf8-4375-97ad-27618279d214" satisfied condition "success or failure"
Apr 3 13:25:58.775: INFO: Trying to get logs from node iruya-worker pod pod-51b37fa5-dbf8-4375-97ad-27618279d214 container test-container:
STEP: delete the pod
Apr 3 13:25:58.830: INFO: Waiting for pod pod-51b37fa5-dbf8-4375-97ad-27618279d214 to disappear
Apr 3 13:25:58.842: INFO: Pod pod-51b37fa5-dbf8-4375-97ad-27618279d214 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:25:58.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6469" for this suite.
Apr 3 13:26:04.876: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:26:04.951: INFO: namespace emptydir-6469 deletion completed in 6.106077549s
• [SLOW TEST:10.286 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:26:04.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4097
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4097
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4097
Apr 3 13:26:05.064: INFO: Found 0 stateful pods, waiting for 1
Apr 3 13:26:15.068: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Apr 3 13:26:15.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4097 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 3 13:26:17.789: INFO: stderr: "I0403 13:26:17.677046 985 log.go:172] (0xc000130e70) (0xc00070aaa0) Create stream\nI0403 13:26:17.677083 985 log.go:172] (0xc000130e70) (0xc00070aaa0) Stream added, broadcasting: 1\nI0403 13:26:17.679959 985 log.go:172] (0xc000130e70) Reply frame received for 1\nI0403 13:26:17.679998 985 log.go:172] (0xc000130e70) (0xc00070ab40) Create stream\nI0403 13:26:17.680009 985 log.go:172] (0xc000130e70) (0xc00070ab40) Stream added, broadcasting: 3\nI0403 13:26:17.680821 985 log.go:172] (0xc000130e70) Reply frame received for 3\nI0403 13:26:17.680863 985 log.go:172] (0xc000130e70) (0xc000980000) Create stream\nI0403 13:26:17.680886 985 log.go:172] (0xc000130e70) (0xc000980000) Stream added, broadcasting: 5\nI0403 13:26:17.681898 985 log.go:172] (0xc000130e70) Reply frame received for 5\nI0403 13:26:17.745389 985 log.go:172] (0xc000130e70) Data frame received for 5\nI0403 13:26:17.745431 985 log.go:172] (0xc000980000) (5) Data frame handling\nI0403 13:26:17.745462 985 log.go:172] (0xc000980000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 13:26:17.780237 985 log.go:172] (0xc000130e70) Data frame received for 3\nI0403 13:26:17.780283 985 log.go:172] (0xc00070ab40) (3) Data frame handling\nI0403 13:26:17.780354 985 log.go:172] (0xc00070ab40) (3) Data frame sent\nI0403 13:26:17.780622 985 log.go:172] (0xc000130e70) Data frame received for 5\nI0403 13:26:17.780659 985 log.go:172] (0xc000980000) (5) Data frame handling\nI0403 13:26:17.780685 985 log.go:172] (0xc000130e70) Data frame received for 3\nI0403 13:26:17.780698 985 log.go:172] (0xc00070ab40) (3) Data frame handling\nI0403 13:26:17.782142 985 log.go:172] (0xc000130e70) Data frame received for 1\nI0403 13:26:17.782167 985 log.go:172] (0xc00070aaa0) (1) Data frame handling\nI0403 13:26:17.782202 985 log.go:172] (0xc00070aaa0) (1) Data frame sent\nI0403 13:26:17.782237 985 log.go:172] (0xc000130e70) (0xc00070aaa0) Stream removed, broadcasting: 1\nI0403 13:26:17.782285 985 log.go:172] (0xc000130e70) Go away received\nI0403 13:26:17.782728 985 log.go:172] (0xc000130e70) (0xc00070aaa0) Stream removed, broadcasting: 1\nI0403 13:26:17.782749 985 log.go:172] (0xc000130e70) (0xc00070ab40) Stream removed, broadcasting: 3\nI0403 13:26:17.782761 985 log.go:172] (0xc000130e70) (0xc000980000) Stream removed, broadcasting: 5\n"
Apr 3 13:26:17.789: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 3 13:26:17.789: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 3 13:26:17.793: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Apr 3 13:26:27.796: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 3 13:26:27.796: INFO: Waiting for statefulset status.replicas updated to 0
Apr 3 13:26:27.818: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999549s
Apr 3 13:26:28.823: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990974847s
Apr 3 13:26:29.827: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.986056305s
Apr 3 13:26:30.832: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.98178745s
Apr 3 13:26:31.836: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.977346937s
Apr 3 13:26:32.841: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.972984572s
Apr 3 13:26:33.846: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.967743425s
Apr 3 13:26:34.852: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.962731701s
Apr 3 13:26:35.856: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.957601314s
Apr 3 13:26:36.861: INFO: Verifying statefulset ss doesn't scale past 1 for another 952.86681ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4097
Apr 3 13:26:37.866: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4097 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 3 13:26:38.072: INFO: stderr: "I0403 13:26:37.993279 1017 log.go:172] (0xc00097a580) (0xc0005e4be0) Create stream\nI0403 13:26:37.993336 1017 log.go:172] (0xc00097a580) (0xc0005e4be0) Stream added, broadcasting: 1\nI0403 13:26:37.995603 1017 log.go:172] (0xc00097a580) Reply frame received for 1\nI0403 13:26:37.995647 1017 log.go:172] (0xc00097a580) (0xc0005e4c80) Create stream\nI0403 13:26:37.995662 1017 log.go:172] (0xc00097a580) (0xc0005e4c80) Stream added, broadcasting: 3\nI0403 13:26:37.996654 1017 log.go:172] (0xc00097a580) Reply frame received for 3\nI0403 13:26:37.996701 1017 log.go:172] (0xc00097a580) (0xc000998000) Create stream\nI0403 13:26:37.996721 1017 log.go:172] (0xc00097a580) (0xc000998000) Stream added, broadcasting: 5\nI0403 13:26:37.998009 1017 log.go:172] (0xc00097a580) Reply frame received for 5\nI0403 13:26:38.064990 1017 log.go:172] (0xc00097a580) Data frame received for 5\nI0403 13:26:38.065016 1017 log.go:172] (0xc000998000) (5) Data frame handling\nI0403 13:26:38.065027 1017 log.go:172] (0xc000998000) (5) Data frame sent\nI0403 13:26:38.065038 1017 log.go:172] (0xc00097a580) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0403 13:26:38.065051 1017 log.go:172] (0xc000998000) (5) Data frame handling\nI0403 13:26:38.065314 1017 log.go:172] (0xc00097a580) Data frame received for 3\nI0403 13:26:38.065608 1017 log.go:172] (0xc0005e4c80) (3) Data frame handling\nI0403 13:26:38.065630 1017 log.go:172] (0xc0005e4c80) (3) Data frame sent\nI0403 13:26:38.065650 1017 log.go:172] (0xc00097a580) Data frame received for 3\nI0403 13:26:38.065676 1017 log.go:172] (0xc0005e4c80) (3) Data frame handling\nI0403 13:26:38.066824 1017 log.go:172] (0xc00097a580) Data frame received for 1\nI0403 13:26:38.066852 1017 log.go:172] (0xc0005e4be0) (1) Data frame handling\nI0403 13:26:38.066868 1017 log.go:172] (0xc0005e4be0) (1) Data frame sent\nI0403 13:26:38.066929 1017 log.go:172] (0xc00097a580) (0xc0005e4be0) Stream removed, broadcasting: 1\nI0403 13:26:38.067066 1017 log.go:172] (0xc00097a580) Go away received\nI0403 13:26:38.067597 1017 log.go:172] (0xc00097a580) (0xc0005e4be0) Stream removed, broadcasting: 1\nI0403 13:26:38.067616 1017 log.go:172] (0xc00097a580) (0xc0005e4c80) Stream removed, broadcasting: 3\nI0403 13:26:38.067624 1017 log.go:172] (0xc00097a580) (0xc000998000) Stream removed, broadcasting: 5\n"
Apr 3 13:26:38.072: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 3 13:26:38.072: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 3 13:26:38.076: INFO: Found 1 stateful pods, waiting for 3
Apr 3 13:26:48.080: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 3 13:26:48.081: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 3 13:26:48.081: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Apr 3 13:26:48.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4097 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 3 13:26:48.273: INFO: stderr: "I0403 13:26:48.211294 1037 log.go:172] (0xc000116fd0) (0xc00060c960) Create stream\nI0403 13:26:48.211347 1037 log.go:172] (0xc000116fd0) (0xc00060c960) Stream added, broadcasting: 1\nI0403 13:26:48.215494 1037 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0403 13:26:48.215562 1037 log.go:172] (0xc000116fd0) (0xc00060c0a0) Create stream\nI0403 13:26:48.215586 1037 log.go:172] (0xc000116fd0) (0xc00060c0a0) Stream added, broadcasting: 3\nI0403 13:26:48.216549 1037 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0403 13:26:48.216608 1037 log.go:172] (0xc000116fd0) (0xc00010e000) Create stream\nI0403 13:26:48.216627 1037 log.go:172] (0xc000116fd0) (0xc00010e000) Stream added, broadcasting: 5\nI0403 13:26:48.217753 1037 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0403 13:26:48.267257 1037 log.go:172] (0xc000116fd0) Data frame received for 3\nI0403 13:26:48.267294 1037 log.go:172] (0xc00060c0a0) (3) Data frame handling\nI0403 13:26:48.267316 1037 log.go:172] (0xc00060c0a0) (3) Data frame sent\nI0403 13:26:48.267328 1037 log.go:172] (0xc000116fd0) Data frame received for 3\nI0403 13:26:48.267335 1037 log.go:172] (0xc00060c0a0) (3) Data frame handling\nI0403 13:26:48.267360 1037 log.go:172] (0xc000116fd0) Data frame received for 5\nI0403 13:26:48.267371 1037 log.go:172] (0xc00010e000) (5) Data frame handling\nI0403 13:26:48.267389 1037 log.go:172] (0xc00010e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 13:26:48.267406 1037 log.go:172] (0xc000116fd0) Data frame received for 5\nI0403 13:26:48.267446 1037 log.go:172] (0xc00010e000) (5) Data frame handling\nI0403 13:26:48.268831 1037 log.go:172] (0xc000116fd0) Data frame received for 1\nI0403 13:26:48.268851 1037 log.go:172] (0xc00060c960) (1) Data frame handling\nI0403 13:26:48.268864 1037 log.go:172] (0xc00060c960) (1) Data frame sent\nI0403 13:26:48.268877 1037 log.go:172] (0xc000116fd0) (0xc00060c960) Stream removed, broadcasting: 1\nI0403 13:26:48.268893 1037 log.go:172] (0xc000116fd0) Go away received\nI0403 13:26:48.269566 1037 log.go:172] (0xc000116fd0) (0xc00060c960) Stream removed, broadcasting: 1\nI0403 13:26:48.269593 1037 log.go:172] (0xc000116fd0) (0xc00060c0a0) Stream removed, broadcasting: 3\nI0403 13:26:48.269605 1037 log.go:172] (0xc000116fd0) (0xc00010e000) Stream removed, broadcasting: 5\n"
Apr 3 13:26:48.273: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 3 13:26:48.273: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 3 13:26:48.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4097 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 3 13:26:48.520: INFO: stderr: "I0403 13:26:48.412192 1059 log.go:172] (0xc000436630) (0xc00063a8c0) Create stream\nI0403 13:26:48.412251 1059 log.go:172] (0xc000436630) (0xc00063a8c0) Stream added, broadcasting: 1\nI0403 13:26:48.414990 1059 log.go:172] (0xc000436630) Reply frame received for 1\nI0403 13:26:48.415038 1059 log.go:172] (0xc000436630) (0xc00063a960) Create stream\nI0403 13:26:48.415049 1059 log.go:172] (0xc000436630) (0xc00063a960) Stream added, broadcasting: 3\nI0403 13:26:48.416162 1059 log.go:172] (0xc000436630) Reply frame received for 3\nI0403 13:26:48.416224 1059 log.go:172] (0xc000436630) (0xc0009fe000) Create stream\nI0403 13:26:48.416248 1059 log.go:172] (0xc000436630) (0xc0009fe000) Stream added, broadcasting: 5\nI0403 13:26:48.417498 1059 log.go:172] (0xc000436630) Reply frame received for 5\nI0403 13:26:48.480649 1059 log.go:172] (0xc000436630) Data frame received for 5\nI0403 13:26:48.480683 1059 log.go:172] (0xc0009fe000) (5) Data frame handling\nI0403 13:26:48.480706 1059 log.go:172] (0xc0009fe000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 13:26:48.514282 1059 log.go:172] (0xc000436630) Data frame received for 3\nI0403 13:26:48.514316 1059 log.go:172] (0xc00063a960) (3) Data frame handling\nI0403 13:26:48.514328 1059 log.go:172] (0xc00063a960) (3) Data frame sent\nI0403 13:26:48.514336 1059 log.go:172] (0xc000436630) Data frame received for 3\nI0403 13:26:48.514343 1059 log.go:172] (0xc00063a960) (3) Data frame handling\nI0403 13:26:48.514370 1059 log.go:172] (0xc000436630) Data frame received for 5\nI0403 13:26:48.514378 1059 log.go:172] (0xc0009fe000) (5) Data frame handling\nI0403 13:26:48.516198 1059 log.go:172] (0xc000436630) Data frame received for 1\nI0403 13:26:48.516228 1059 log.go:172] (0xc00063a8c0) (1) Data frame handling\nI0403 13:26:48.516241 1059 log.go:172] (0xc00063a8c0) (1) Data frame sent\nI0403 13:26:48.516257 1059 log.go:172] (0xc000436630) (0xc00063a8c0) Stream removed, broadcasting: 1\nI0403 13:26:48.516279 1059 log.go:172] (0xc000436630) Go away received\nI0403 13:26:48.516606 1059 log.go:172] (0xc000436630) (0xc00063a8c0) Stream removed, broadcasting: 1\nI0403 13:26:48.516621 1059 log.go:172] (0xc000436630) (0xc00063a960) Stream removed, broadcasting: 3\nI0403 13:26:48.516629 1059 log.go:172] (0xc000436630) (0xc0009fe000) Stream removed, broadcasting: 5\n"
Apr 3 13:26:48.520: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 3 13:26:48.520: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 3 13:26:48.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 3 13:26:48.756: INFO: stderr: "I0403 13:26:48.640412 1080 log.go:172] (0xc00013ae70) (0xc0009c4780) Create stream\nI0403 13:26:48.640466 1080 log.go:172] (0xc00013ae70) (0xc0009c4780) Stream added, broadcasting: 1\nI0403 13:26:48.643538 1080 log.go:172] (0xc00013ae70) Reply frame received for 1\nI0403 13:26:48.643583 1080 log.go:172] (0xc00013ae70) (0xc00096c000) Create stream\nI0403 13:26:48.643606 1080 log.go:172] (0xc00013ae70) (0xc00096c000) Stream added, broadcasting: 3\nI0403 13:26:48.644666 1080 log.go:172] (0xc00013ae70) Reply frame received for 3\nI0403 13:26:48.644705 1080 log.go:172] (0xc00013ae70) (0xc0009c4820) Create stream\nI0403 13:26:48.644719 1080 log.go:172] (0xc00013ae70) (0xc0009c4820) Stream added, broadcasting: 5\nI0403 13:26:48.645863 1080 log.go:172] (0xc00013ae70) Reply frame received for 5\nI0403 13:26:48.709402 1080 log.go:172] (0xc00013ae70) Data frame received for 5\nI0403 13:26:48.709432 1080 log.go:172] (0xc0009c4820) (5) Data frame handling\nI0403 13:26:48.709455 1080 log.go:172] (0xc0009c4820) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 13:26:48.749687 1080 log.go:172] (0xc00013ae70) Data frame received for 3\nI0403 13:26:48.749726 1080 log.go:172] (0xc00096c000) (3) Data frame handling\nI0403 13:26:48.749760 1080 log.go:172] (0xc00096c000) (3) Data frame sent\nI0403 13:26:48.749812 1080 log.go:172] (0xc00013ae70) Data frame received for 5\nI0403 13:26:48.749864 1080 log.go:172] (0xc0009c4820) (5) Data frame handling\nI0403 13:26:48.749902 1080 log.go:172] (0xc00013ae70) Data frame received for 3\nI0403 13:26:48.749922 1080 log.go:172] (0xc00096c000) (3) Data frame handling\nI0403 13:26:48.751594 1080 log.go:172] (0xc00013ae70) Data frame received for 1\nI0403 13:26:48.751613 1080 log.go:172] (0xc0009c4780) (1) Data frame handling\nI0403 13:26:48.751623 1080 log.go:172] (0xc0009c4780) (1) Data frame sent\nI0403 13:26:48.751643 1080 log.go:172] (0xc00013ae70) (0xc0009c4780) Stream removed, broadcasting: 1\nI0403 13:26:48.751835 1080 log.go:172] (0xc00013ae70) Go away received\nI0403 13:26:48.752003 1080 log.go:172] (0xc00013ae70) (0xc0009c4780) Stream removed, broadcasting: 1\nI0403 13:26:48.752020 1080 log.go:172] (0xc00013ae70) (0xc00096c000) Stream removed, broadcasting: 3\nI0403 13:26:48.752029 1080 log.go:172] (0xc00013ae70) (0xc0009c4820) Stream removed, broadcasting: 5\n"
Apr 3 13:26:48.756: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 3 13:26:48.756: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 3 13:26:48.756: INFO: Waiting for statefulset status.replicas updated to 0
Apr 3 13:26:48.760: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Apr 3 13:26:58.768: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Apr 3 13:26:58.768: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Apr 3 13:26:58.768: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Apr 3 13:26:58.785: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.99999949s
Apr 3 13:26:59.790: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.991063151s
Apr 3 13:27:00.795: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.986006659s
Apr 3 13:27:01.800: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.980435842s
Apr 3 13:27:02.806: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975321376s
Apr 3 13:27:03.815: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.970040857s
Apr 3 13:27:04.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.960478339s
Apr 3 13:27:05.842: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.955505621s
Apr 3 13:27:06.847: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.933592081s
Apr 3 13:27:07.853: INFO: Verifying statefulset ss doesn't scale past 3 for another 928.509017ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-4097
Apr 3 13:27:08.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4097 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 3 13:27:09.093: INFO: stderr: "I0403 13:27:09.008439 1100 log.go:172] (0xc00013ee70) (0xc000812640) Create stream\nI0403 13:27:09.008504 1100 log.go:172] (0xc00013ee70) (0xc000812640) Stream added, broadcasting: 1\nI0403 13:27:09.011001 1100 log.go:172] (0xc00013ee70) Reply frame received for 1\nI0403 13:27:09.011059 1100 log.go:172] (0xc00013ee70) (0xc0008126e0) Create stream\nI0403 13:27:09.011075 1100 log.go:172] (0xc00013ee70) (0xc0008126e0) Stream added, broadcasting: 3\nI0403 13:27:09.012836 1100 log.go:172] (0xc00013ee70) Reply frame received for 3\nI0403 13:27:09.012891 1100 log.go:172] (0xc00013ee70) (0xc0006201e0) Create stream\nI0403 13:27:09.012908 1100 log.go:172] (0xc00013ee70) (0xc0006201e0) Stream added, broadcasting: 5\nI0403 13:27:09.013938 1100 log.go:172] (0xc00013ee70) Reply frame received for 5\nI0403 13:27:09.085305 1100 log.go:172] (0xc00013ee70) Data frame received for 3\nI0403 13:27:09.085371 1100 log.go:172] (0xc0008126e0) (3) Data frame handling\nI0403 13:27:09.085396 1100 log.go:172] (0xc0008126e0) (3) Data frame sent\nI0403 13:27:09.085418 1100 log.go:172] (0xc00013ee70) Data frame received for 3\nI0403 13:27:09.085435 1100 log.go:172] (0xc0008126e0) (3) Data frame handling\nI0403 13:27:09.085472 1100 log.go:172] (0xc00013ee70) Data frame received for 5\nI0403 13:27:09.085505 1100 log.go:172] (0xc0006201e0) (5) Data frame handling\nI0403 13:27:09.085530 1100 log.go:172] (0xc0006201e0) (5) Data frame sent\nI0403 13:27:09.085562 1100 log.go:172] (0xc00013ee70) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0403 13:27:09.085591 1100 log.go:172] (0xc0006201e0) (5) Data frame handling\nI0403 13:27:09.087597 1100 log.go:172] (0xc00013ee70) Data frame received for 1\nI0403 13:27:09.087636 1100 log.go:172] (0xc000812640) (1) Data frame handling\nI0403 13:27:09.087653 1100 log.go:172] (0xc000812640) (1) Data frame sent\nI0403 13:27:09.087671 1100 log.go:172] (0xc00013ee70) (0xc000812640) Stream removed, broadcasting: 1\nI0403 13:27:09.087893 1100 log.go:172] (0xc00013ee70) Go away received\nI0403 13:27:09.088052 1100 log.go:172] (0xc00013ee70) (0xc000812640) Stream removed, broadcasting: 1\nI0403 13:27:09.088072 1100 log.go:172] (0xc00013ee70) (0xc0008126e0) Stream removed, broadcasting: 3\nI0403 13:27:09.088083 1100 log.go:172] (0xc00013ee70) (0xc0006201e0) Stream removed, broadcasting: 5\n"
Apr 3 13:27:09.093: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 3 13:27:09.093: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 3 13:27:09.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4097 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 3 13:27:09.306: INFO: stderr: "I0403 13:27:09.224263 1120 log.go:172] (0xc0009a2630) (0xc000686a00) Create stream\nI0403 13:27:09.224319 1120 log.go:172] (0xc0009a2630) (0xc000686a00) Stream added, broadcasting: 1\nI0403 13:27:09.227284 1120 log.go:172] (0xc0009a2630) Reply frame received for 1\nI0403 13:27:09.227338 1120 log.go:172] (0xc0009a2630) (0xc0009fe000) Create stream\nI0403 13:27:09.227366 1120 log.go:172] (0xc0009a2630) (0xc0009fe000) Stream added, broadcasting: 3\nI0403 13:27:09.228295 1120 log.go:172] (0xc0009a2630) Reply frame received for 3\nI0403 13:27:09.228337 1120 log.go:172] (0xc0009a2630) (0xc000686aa0) Create stream\nI0403 13:27:09.228349 1120 log.go:172] (0xc0009a2630) (0xc000686aa0) Stream added, broadcasting: 5\nI0403 13:27:09.229400 1120 log.go:172] (0xc0009a2630) Reply frame received for 5\nI0403 13:27:09.298426 1120 log.go:172] (0xc0009a2630) Data frame received for 5\nI0403 13:27:09.298466 1120 log.go:172] (0xc000686aa0) (5) Data frame handling\nI0403 13:27:09.298494 1120 log.go:172] (0xc000686aa0) (5) Data frame sent\nI0403 13:27:09.298506 1120 log.go:172] (0xc0009a2630) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0403 13:27:09.298514 1120 log.go:172] (0xc000686aa0) (5) Data frame handling\nI0403 13:27:09.298651 1120 log.go:172] (0xc0009a2630) Data frame received for 3\nI0403 13:27:09.299766 1120 log.go:172] (0xc0009fe000) (3) Data frame handling\nI0403 13:27:09.299792 1120 log.go:172] (0xc0009fe000) (3) Data frame sent\nI0403 13:27:09.299822 1120 log.go:172] (0xc0009a2630) Data frame received for 3\nI0403 13:27:09.299846 1120 log.go:172] (0xc0009fe000) (3) Data frame handling\nI0403 13:27:09.300881 1120 log.go:172] (0xc0009a2630) Data frame received for 1\nI0403 13:27:09.300922 1120 log.go:172] (0xc000686a00) (1) Data frame handling\nI0403 13:27:09.300943 1120 log.go:172] (0xc000686a00) (1) Data frame sent\nI0403 13:27:09.300979 1120 log.go:172] (0xc0009a2630) (0xc000686a00) Stream removed, broadcasting: 1\nI0403 13:27:09.301010 1120 log.go:172] (0xc0009a2630) Go away received\nI0403 13:27:09.301491 1120 log.go:172] (0xc0009a2630) (0xc000686a00) Stream removed, broadcasting: 1\nI0403 13:27:09.301518 1120 log.go:172] (0xc0009a2630) (0xc0009fe000) Stream removed, broadcasting: 3\nI0403 13:27:09.301530 1120 log.go:172] (0xc0009a2630) (0xc000686aa0) Stream removed, broadcasting: 5\n"
Apr 3 13:27:09.306: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 3 13:27:09.306: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 3 13:27:09.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4097 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 3 13:27:09.512: INFO: stderr: "I0403 13:27:09.430349 1143 log.go:172] (0xc000a04580) (0xc000700b40) Create stream\nI0403 13:27:09.430410 1143 log.go:172] (0xc000a04580) (0xc000700b40) Stream added, broadcasting: 1\nI0403 13:27:09.432553 1143 log.go:172] (0xc000a04580) Reply frame received for 1\nI0403 13:27:09.432586 1143 log.go:172] (0xc000a04580) (0xc00086a000) Create stream\nI0403 13:27:09.432593 1143 log.go:172] (0xc000a04580) (0xc00086a000) Stream added, broadcasting: 3\nI0403 13:27:09.433684 1143 log.go:172] (0xc000a04580) Reply frame received for 3\nI0403 13:27:09.433732 1143 log.go:172] (0xc000a04580) (0xc0008f0000) Create stream\nI0403 13:27:09.433758 1143 log.go:172] (0xc000a04580) (0xc0008f0000) Stream added, broadcasting: 5\nI0403 13:27:09.434926 1143 log.go:172] (0xc000a04580) Reply frame received for 5\nI0403 13:27:09.504947 1143 log.go:172] (0xc000a04580) Data frame received for 3\nI0403 13:27:09.504990 1143 log.go:172] (0xc00086a000) (3) Data frame handling\nI0403 13:27:09.505009 1143 log.go:172] (0xc00086a000) (3) Data frame sent\nI0403 13:27:09.505027 1143 log.go:172] (0xc000a04580) Data frame received for 3\nI0403 13:27:09.505040 1143
log.go:172] (0xc00086a000) (3) Data frame handling\nI0403 13:27:09.505071 1143 log.go:172] (0xc000a04580) Data frame received for 5\nI0403 13:27:09.505079 1143 log.go:172] (0xc0008f0000) (5) Data frame handling\nI0403 13:27:09.505100 1143 log.go:172] (0xc0008f0000) (5) Data frame sent\nI0403 13:27:09.505234 1143 log.go:172] (0xc000a04580) Data frame received for 5\nI0403 13:27:09.505247 1143 log.go:172] (0xc0008f0000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0403 13:27:09.506737 1143 log.go:172] (0xc000a04580) Data frame received for 1\nI0403 13:27:09.506829 1143 log.go:172] (0xc000700b40) (1) Data frame handling\nI0403 13:27:09.506899 1143 log.go:172] (0xc000700b40) (1) Data frame sent\nI0403 13:27:09.506937 1143 log.go:172] (0xc000a04580) (0xc000700b40) Stream removed, broadcasting: 1\nI0403 13:27:09.506953 1143 log.go:172] (0xc000a04580) Go away received\nI0403 13:27:09.507374 1143 log.go:172] (0xc000a04580) (0xc000700b40) Stream removed, broadcasting: 1\nI0403 13:27:09.507400 1143 log.go:172] (0xc000a04580) (0xc00086a000) Stream removed, broadcasting: 3\nI0403 13:27:09.507413 1143 log.go:172] (0xc000a04580) (0xc0008f0000) Stream removed, broadcasting: 5\n" Apr 3 13:27:09.512: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 3 13:27:09.512: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 3 13:27:09.512: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 3 13:27:39.528: INFO: Deleting all statefulset in ns statefulset-4097 Apr 3 13:27:39.531: INFO: Scaling statefulset ss to 0 Apr 3 13:27:39.541: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 13:27:39.543: INFO: Deleting 
statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:27:39.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4097" for this suite. Apr 3 13:27:45.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:27:45.674: INFO: namespace statefulset-4097 deletion completed in 6.110752145s • [SLOW TEST:100.721 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:27:45.674: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-83fdce8b-3580-4922-93d6-09634a0f1740 STEP: Creating a pod to test consume configMaps 
Apr 3 13:27:45.775: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f7394c21-8773-4d33-88c5-086428be0e63" in namespace "projected-2372" to be "success or failure" Apr 3 13:27:45.779: INFO: Pod "pod-projected-configmaps-f7394c21-8773-4d33-88c5-086428be0e63": Phase="Pending", Reason="", readiness=false. Elapsed: 3.822551ms Apr 3 13:27:47.810: INFO: Pod "pod-projected-configmaps-f7394c21-8773-4d33-88c5-086428be0e63": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034113057s Apr 3 13:27:49.813: INFO: Pod "pod-projected-configmaps-f7394c21-8773-4d33-88c5-086428be0e63": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03770219s STEP: Saw pod success Apr 3 13:27:49.813: INFO: Pod "pod-projected-configmaps-f7394c21-8773-4d33-88c5-086428be0e63" satisfied condition "success or failure" Apr 3 13:27:49.816: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-f7394c21-8773-4d33-88c5-086428be0e63 container projected-configmap-volume-test: STEP: delete the pod Apr 3 13:27:49.846: INFO: Waiting for pod pod-projected-configmaps-f7394c21-8773-4d33-88c5-086428be0e63 to disappear Apr 3 13:27:49.895: INFO: Pod pod-projected-configmaps-f7394c21-8773-4d33-88c5-086428be0e63 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:27:49.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2372" for this suite. 
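[Editor's note] The StatefulSet scale-down transcript earlier repeatedly execs the same command into each pod: `mv -v /tmp/index.html /usr/share/nginx/html/ || true`. Moving the index file back into the web root makes the pod's readiness probe pass again, and the trailing `|| true` keeps the exec from failing on a pod where the file was already moved. A minimal local sketch of just that shell idiom, with temporary directories standing in for the pod's filesystem (paths here are illustrative, not the pod's real layout):

```shell
# Stand-ins for /tmp and /usr/share/nginx/html inside the pod.
tmpdir=$(mktemp -d)
webroot=$(mktemp -d)
echo ok > "$tmpdir/index.html"

# First run moves the file; -v prints the "'src' -> 'dst'" line
# captured as stdout in the log above.
mv -v "$tmpdir/index.html" "$webroot/" || true

# Re-running after the file is gone would make mv fail, but || true
# swallows the error, so the same command is safe on every pod.
mv -v "$tmpdir/index.html" "$webroot/" || true
```

The `|| true` is what lets the test loop over all replicas unconditionally instead of tracking which pods have already been toggled.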
Apr 3 13:27:55.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:27:55.993: INFO: namespace projected-2372 deletion completed in 6.093886918s • [SLOW TEST:10.319 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:27:55.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 3 13:27:56.039: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:28:02.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace 
"init-container-7215" for this suite. Apr 3 13:28:08.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:28:08.150: INFO: namespace init-container-7215 deletion completed in 6.082859697s • [SLOW TEST:12.156 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:28:08.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 13:28:08.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c66adac8-9dec-4e23-8e45-b6010b42d288" in namespace "downward-api-2473" to be "success or failure" Apr 3 13:28:08.229: INFO: Pod "downwardapi-volume-c66adac8-9dec-4e23-8e45-b6010b42d288": Phase="Pending", Reason="", readiness=false. 
Elapsed: 7.844091ms Apr 3 13:28:10.232: INFO: Pod "downwardapi-volume-c66adac8-9dec-4e23-8e45-b6010b42d288": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011198068s Apr 3 13:28:12.237: INFO: Pod "downwardapi-volume-c66adac8-9dec-4e23-8e45-b6010b42d288": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015846365s STEP: Saw pod success Apr 3 13:28:12.237: INFO: Pod "downwardapi-volume-c66adac8-9dec-4e23-8e45-b6010b42d288" satisfied condition "success or failure" Apr 3 13:28:12.240: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-c66adac8-9dec-4e23-8e45-b6010b42d288 container client-container: STEP: delete the pod Apr 3 13:28:12.260: INFO: Waiting for pod downwardapi-volume-c66adac8-9dec-4e23-8e45-b6010b42d288 to disappear Apr 3 13:28:12.264: INFO: Pod downwardapi-volume-c66adac8-9dec-4e23-8e45-b6010b42d288 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:28:12.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2473" for this suite. 
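[Editor's note] Each of the volume tests above follows the same "success or failure" pattern: poll the pod's phase until it leaves Pending and reaches Succeeded (or the 5m0s budget runs out), logging `Phase="..."` and the elapsed time at each sample. A sketch of that poll loop in shell, with a file-backed stub in place of the real `kubectl get pod -o jsonpath='{.status.phase}'` call so it runs without a cluster (the stub and its timing are illustrative only):

```shell
# Stub phase source: a file stands in for querying the API server.
state=$(mktemp)
echo Pending > "$state"

# Flip the stub to Succeeded shortly, as the kubelet eventually would.
( sleep 1; echo Succeeded > "$state" ) &

deadline=$(( $(date +%s) + 300 ))    # the 5m0s budget from the log
phase=Pending
while [ "$(date +%s)" -lt "$deadline" ]; do
    phase=$(cat "$state")
    echo "Phase=\"$phase\""
    case "$phase" in
        Succeeded|Failed) break ;;   # both are terminal phases
    esac
    sleep 1                          # the log shows roughly 2s between samples
done
wait
```

Treating Failed as terminal too is what lets the framework report "success or failure" rather than waiting out the full timeout on a crashed pod.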
Apr 3 13:28:18.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:28:18.362: INFO: namespace downward-api-2473 deletion completed in 6.093536856s • [SLOW TEST:10.212 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:28:18.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 13:28:18.417: INFO: Waiting up to 5m0s for pod "downwardapi-volume-01c26fba-8b88-4ab8-9e1a-ec8a27437ade" in namespace "projected-6723" to be "success or failure" Apr 3 13:28:18.420: INFO: Pod "downwardapi-volume-01c26fba-8b88-4ab8-9e1a-ec8a27437ade": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.734659ms Apr 3 13:28:20.424: INFO: Pod "downwardapi-volume-01c26fba-8b88-4ab8-9e1a-ec8a27437ade": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00746068s Apr 3 13:28:22.429: INFO: Pod "downwardapi-volume-01c26fba-8b88-4ab8-9e1a-ec8a27437ade": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012023144s STEP: Saw pod success Apr 3 13:28:22.429: INFO: Pod "downwardapi-volume-01c26fba-8b88-4ab8-9e1a-ec8a27437ade" satisfied condition "success or failure" Apr 3 13:28:22.432: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-01c26fba-8b88-4ab8-9e1a-ec8a27437ade container client-container: STEP: delete the pod Apr 3 13:28:22.452: INFO: Waiting for pod downwardapi-volume-01c26fba-8b88-4ab8-9e1a-ec8a27437ade to disappear Apr 3 13:28:22.456: INFO: Pod downwardapi-volume-01c26fba-8b88-4ab8-9e1a-ec8a27437ade no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:28:22.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6723" for this suite. 
Apr 3 13:28:28.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:28:28.551: INFO: namespace projected-6723 deletion completed in 6.09164421s • [SLOW TEST:10.189 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:28:28.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:28:28.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3100' Apr 3 13:28:28.935: INFO: stderr: "" Apr 3 13:28:28.935: INFO: stdout: "replicationcontroller/redis-master created\n" Apr 3 13:28:28.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3100' Apr 3 13:28:29.226: INFO: stderr: "" Apr 3 13:28:29.226: INFO: 
stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. Apr 3 13:28:30.230: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:28:30.230: INFO: Found 0 / 1 Apr 3 13:28:31.230: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:28:31.230: INFO: Found 0 / 1 Apr 3 13:28:32.230: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:28:32.230: INFO: Found 0 / 1 Apr 3 13:28:33.230: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:28:33.230: INFO: Found 1 / 1 Apr 3 13:28:33.230: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 3 13:28:33.233: INFO: Selector matched 1 pods for map[app:redis] Apr 3 13:28:33.233: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Apr 3 13:28:33.233: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-fq4rn --namespace=kubectl-3100' Apr 3 13:28:33.343: INFO: stderr: "" Apr 3 13:28:33.343: INFO: stdout: "Name: redis-master-fq4rn\nNamespace: kubectl-3100\nPriority: 0\nNode: iruya-worker/172.17.0.6\nStart Time: Fri, 03 Apr 2020 13:28:29 +0000\nLabels: app=redis\n role=master\nAnnotations: <none>\nStatus: Running\nIP: 10.244.2.96\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://202b2fc322e0fbf314b43a69d1fc5b9a70e49cfa684884ad2b174ea88a1aaebb\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Fri, 03 Apr 2020 13:28:31 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-lpgln (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-lpgln:\n Type: Secret (a volume populated by a Secret)\n SecretName: 
default-token-lpgln\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 5s default-scheduler Successfully assigned kubectl-3100/redis-master-fq4rn to iruya-worker\n Normal Pulled 3s kubelet, iruya-worker Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-worker Created container redis-master\n Normal Started 2s kubelet, iruya-worker Started container redis-master\n" Apr 3 13:28:33.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-3100' Apr 3 13:28:33.479: INFO: stderr: "" Apr 3 13:28:33.479: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3100\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: <none>\nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: <none>\n Mounts: <none>\n Volumes: <none>\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-fq4rn\n" Apr 3 13:28:33.480: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-3100' Apr 3 13:28:33.613: INFO: stderr: "" Apr 3 13:28:33.613: INFO: stdout: "Name: redis-master\nNamespace: kubectl-3100\nLabels: app=redis\n role=master\nAnnotations: <none>\nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.109.125.219\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.2.96:6379\nSession Affinity: None\nEvents: <none>\n" Apr 3 13:28:33.618: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' Apr 3 13:28:33.737: INFO: stderr: "" Apr 3 13:28:33.737: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Fri, 03 Apr 2020 13:27:43 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Fri, 03 Apr 2020 13:27:43 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Fri, 03 Apr 2020 13:27:43 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Fri, 03 Apr 2020 13:27:43 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n 
Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 18d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n kube-system kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 18d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 18d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: <none>\n" Apr 3 13:28:33.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3100' Apr 3 13:28:33.847: INFO: stderr: "" Apr 3 13:28:33.847: INFO: stdout: "Name: kubectl-3100\nLabels: e2e-framework=kubectl\n e2e-run=c7dabd96-ad39-4dda-bda7-5cccc1631f6b\nAnnotations: <none>\nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:28:33.847: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3100" for this suite. 
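[Editor's note] The "Found 0 / 1 ... Found 1 / 1" lines in the kubectl-describe test above come from a wait that repeatedly lists the pods matching the `app=redis` selector and counts how many are Running. A shell sketch of that count-and-wait, with a file-backed stub in place of the real `kubectl get pods -l app=redis` call so it runs without a cluster (the pod name, phases, and attempt count are illustrative):

```shell
# Stub pod lister: one "name phase" line per matching pod, backed by a file.
pods=$(mktemp)
echo "redis-master-fq4rn Pending" > "$pods"

# Flip the stub to Running shortly, as the scheduler/kubelet would.
( sleep 1; echo "redis-master-fq4rn Running" > "$pods" ) &

want=1
found=0
for attempt in $(seq 1 30); do
    # grep -c prints 0 (and exits nonzero) when nothing matches; || true
    # keeps the substitution's status clean either way.
    found=$(grep -c ' Running$' "$pods" || true)
    echo "Found $found / $want"
    if [ "$found" -ge "$want" ]; then
        break
    fi
    sleep 1
done
wait
```

Counting only Running pods (rather than all pods matching the selector) is what makes the first few samples report 0 / 1 even though the pod object already exists.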
Apr 3 13:28:55.861: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:28:55.936: INFO: namespace kubectl-3100 deletion completed in 22.085833833s • [SLOW TEST:27.385 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:28:55.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Starting the proxy Apr 3 13:28:55.997: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix701926643/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:28:56.070: INFO: Waiting up to 3m0s for all (but 0) 
nodes to be ready STEP: Destroying namespace "kubectl-9085" for this suite. Apr 3 13:29:02.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:29:02.190: INFO: namespace kubectl-9085 deletion completed in 6.11655077s • [SLOW TEST:6.254 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Proxy server /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:29:02.190: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override arguments Apr 3 13:29:02.268: INFO: Waiting up to 5m0s for pod "client-containers-02b038b5-337c-4212-bc3b-9e3fa74c9641" in namespace "containers-7614" to be "success or failure" Apr 3 13:29:02.270: INFO: Pod "client-containers-02b038b5-337c-4212-bc3b-9e3fa74c9641": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.595283ms Apr 3 13:29:04.274: INFO: Pod "client-containers-02b038b5-337c-4212-bc3b-9e3fa74c9641": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006400953s Apr 3 13:29:06.279: INFO: Pod "client-containers-02b038b5-337c-4212-bc3b-9e3fa74c9641": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011177145s STEP: Saw pod success Apr 3 13:29:06.279: INFO: Pod "client-containers-02b038b5-337c-4212-bc3b-9e3fa74c9641" satisfied condition "success or failure" Apr 3 13:29:06.283: INFO: Trying to get logs from node iruya-worker2 pod client-containers-02b038b5-337c-4212-bc3b-9e3fa74c9641 container test-container: STEP: delete the pod Apr 3 13:29:06.303: INFO: Waiting for pod client-containers-02b038b5-337c-4212-bc3b-9e3fa74c9641 to disappear Apr 3 13:29:06.307: INFO: Pod client-containers-02b038b5-337c-4212-bc3b-9e3fa74c9641 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:29:06.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7614" for this suite. 
Apr 3 13:29:12.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:29:12.439: INFO: namespace containers-7614 deletion completed in 6.128530639s • [SLOW TEST:10.248 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:29:12.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change Apr 3 13:29:12.535: INFO: Pod name pod-release: Found 0 pods out of 1 Apr 3 13:29:17.541: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:29:17.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5818" for this suite. 
Apr 3 13:29:23.640: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:29:23.713: INFO: namespace replication-controller-5818 deletion completed in 6.134047026s • [SLOW TEST:11.274 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:29:23.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:29:27.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-843" for this suite. 
Apr 3 13:30:15.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:30:15.874: INFO: namespace kubelet-test-843 deletion completed in 48.087394687s • [SLOW TEST:52.161 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox Pod with hostAliases /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136 should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:30:15.876: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:30:15.986: INFO: Pod name rollover-pod: Found 0 pods out of 1 Apr 3 13:30:20.991: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 3 13:30:20.991: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Apr 3 13:30:22.995: INFO: Creating deployment "test-rollover-deployment" Apr 3 13:30:23.004: INFO: Make sure deployment 
"test-rollover-deployment" performs scaling operations Apr 3 13:30:25.011: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Apr 3 13:30:25.018: INFO: Ensure that both replica sets have 1 created replica Apr 3 13:30:25.024: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Apr 3 13:30:25.031: INFO: Updating deployment test-rollover-deployment Apr 3 13:30:25.031: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Apr 3 13:30:27.066: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Apr 3 13:30:27.072: INFO: Make sure deployment "test-rollover-deployment" is complete Apr 3 13:30:27.077: INFO: all replica sets need to contain the pod-template-hash label Apr 3 13:30:27.078: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517425, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 13:30:29.086: INFO: all replica sets need to contain the pod-template-hash label Apr 3 13:30:29.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, 
AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517428, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 13:30:31.086: INFO: all replica sets need to contain the pod-template-hash label Apr 3 13:30:31.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517428, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 13:30:33.086: INFO: all replica sets need to contain the pod-template-hash label Apr 3 13:30:33.086: INFO: deployment 
status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517428, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 13:30:35.086: INFO: all replica sets need to contain the pod-template-hash label Apr 3 13:30:35.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517428, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 13:30:37.086: INFO: all 
replica sets need to contain the pod-template-hash label Apr 3 13:30:37.086: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517428, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517423, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 13:30:39.086: INFO: Apr 3 13:30:39.086: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 3 13:30:39.097: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3707,SelfLink:/apis/apps/v1/namespaces/deployment-3707/deployments/test-rollover-deployment,UID:28d595fe-e419-4ea2-bb5d-45369ebcf029,ResourceVersion:3397254,Generation:2,CreationTimestamp:2020-04-03 13:30:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-03 13:30:23 +0000 UTC 2020-04-03 13:30:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-03 13:30:38 +0000 UTC 2020-04-03 13:30:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 3 13:30:39.101: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3707,SelfLink:/apis/apps/v1/namespaces/deployment-3707/replicasets/test-rollover-deployment-854595fc44,UID:f070c460-ea05-486b-9db5-96b6e7156f11,ResourceVersion:3397241,Generation:2,CreationTimestamp:2020-04-03 13:30:25 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 28d595fe-e419-4ea2-bb5d-45369ebcf029 0xc002b3d3b7 0xc002b3d3b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 3 13:30:39.101: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Apr 3 13:30:39.101: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3707,SelfLink:/apis/apps/v1/namespaces/deployment-3707/replicasets/test-rollover-controller,UID:179c74f4-7afe-4fe6-982e-9b90e67bfd0d,ResourceVersion:3397252,Generation:2,CreationTimestamp:2020-04-03 13:30:15 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 28d595fe-e419-4ea2-bb5d-45369ebcf029 0xc002b3d2d7 0xc002b3d2d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 3 13:30:39.102: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3707,SelfLink:/apis/apps/v1/namespaces/deployment-3707/replicasets/test-rollover-deployment-9b8b997cf,UID:5a8d0578-6308-432a-acf3-6ed4f9a8e610,ResourceVersion:3397204,Generation:2,CreationTimestamp:2020-04-03 13:30:23 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 28d595fe-e419-4ea2-bb5d-45369ebcf029 0xc002b3d480 0xc002b3d481}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 3 13:30:39.105: INFO: Pod "test-rollover-deployment-854595fc44-9w5vc" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-9w5vc,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3707,SelfLink:/api/v1/namespaces/deployment-3707/pods/test-rollover-deployment-854595fc44-9w5vc,UID:1929ba9b-d43e-44f1-8a5d-6d80695651d5,ResourceVersion:3397219,Generation:0,CreationTimestamp:2020-04-03 13:30:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 f070c460-ea05-486b-9db5-96b6e7156f11 0xc001b9a087 0xc001b9a088}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hvpgf {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hvpgf,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-hvpgf true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc001b9a100} {node.kubernetes.io/unreachable Exists NoExecute 0xc001b9a120}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:30:25 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:30:28 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:30:28 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:30:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.16,StartTime:2020-04-03 13:30:25 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-03 13:30:27 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://2d5acda1fda43683cdd547e7bded2485350d3b3172c62a742f25293d5baef737}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:30:39.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3707" for this suite. Apr 3 13:30:45.207: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:30:45.287: INFO: namespace deployment-3707 deletion completed in 6.176766882s • [SLOW TEST:29.412 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:30:45.288: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:30:45.319: INFO: Creating 
replica set "test-rolling-update-controller" (going to be adopted) Apr 3 13:30:45.346: INFO: Pod name sample-pod: Found 0 pods out of 1 Apr 3 13:30:50.350: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Apr 3 13:30:50.350: INFO: Creating deployment "test-rolling-update-deployment" Apr 3 13:30:50.355: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has Apr 3 13:30:50.366: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created Apr 3 13:30:52.374: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected Apr 3 13:30:52.377: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517450, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517450, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517450, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721517450, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 13:30:54.382: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 3 13:30:54.392: INFO: Deployment "test-rolling-update-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-1720,SelfLink:/apis/apps/v1/namespaces/deployment-1720/deployments/test-rolling-update-deployment,UID:39259c32-90b1-4f8c-a170-a9bbd547861b,ResourceVersion:3397364,Generation:1,CreationTimestamp:2020-04-03 13:30:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-04-03 13:30:50 +0000 UTC 2020-04-03 13:30:50 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-04-03 13:30:53 +0000 UTC 2020-04-03 13:30:50 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Apr 3 13:30:54.395: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-1720,SelfLink:/apis/apps/v1/namespaces/deployment-1720/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:29add3aa-e127-42b9-9c99-00cdc3152a1e,ResourceVersion:3397353,Generation:1,CreationTimestamp:2020-04-03 13:30:50 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 39259c32-90b1-4f8c-a170-a9bbd547861b 0xc002cabfc7 0xc002cabfc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Apr 3 13:30:54.395: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": Apr 3 13:30:54.395: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-1720,SelfLink:/apis/apps/v1/namespaces/deployment-1720/replicasets/test-rolling-update-controller,UID:8e88e2a6-3c57-41d0-97e6-86750c32bac2,ResourceVersion:3397362,Generation:2,CreationTimestamp:2020-04-03 13:30:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 39259c32-90b1-4f8c-a170-a9bbd547861b 0xc002cabef7 0xc002cabef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 3 13:30:54.398: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-trc7b" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-trc7b,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-1720,SelfLink:/api/v1/namespaces/deployment-1720/pods/test-rolling-update-deployment-79f6b9d75c-trc7b,UID:77edb7c5-c6f5-4b92-a9ca-8ed64a5099e9,ResourceVersion:3397352,Generation:0,CreationTimestamp:2020-04-03 13:30:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 29add3aa-e127-42b9-9c99-00cdc3152a1e 0xc000448717 0xc000448718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-v7rvb {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-v7rvb,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-v7rvb true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000448790} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004487b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:30:50 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:30:53 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:30:53 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 13:30:50 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.101,StartTime:2020-04-03 13:30:50 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-04-03 13:30:52 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://4b4576b645eac1e62c9bfcf95d6e516fab18fbb2ccf9b45523f2305009c7254c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:30:54.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-1720" for this suite. Apr 3 13:31:00.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:31:00.488: INFO: namespace deployment-1720 deletion completed in 6.085928545s • [SLOW TEST:15.201 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:31:00.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-52db52c3-44ee-4268-98b8-cdddd1d4d084 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-52db52c3-44ee-4268-98b8-cdddd1d4d084 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:32:10.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1014" for this suite. 
Apr 3 13:32:32.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:32:33.001: INFO: namespace configmap-1014 deletion completed in 22.107574423s • [SLOW TEST:92.513 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:32:33.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-7xvg STEP: Creating a pod to test atomic-volume-subpath Apr 3 13:32:33.092: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7xvg" in namespace "subpath-2413" to be "success or failure" Apr 3 13:32:33.110: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Pending", Reason="", readiness=false. Elapsed: 17.575998ms Apr 3 13:32:35.113: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020867279s Apr 3 13:32:37.117: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 4.024986085s Apr 3 13:32:39.121: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 6.029337943s Apr 3 13:32:41.126: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 8.033784637s Apr 3 13:32:43.130: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 10.037915035s Apr 3 13:32:45.134: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 12.04217151s Apr 3 13:32:47.139: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 14.046536171s Apr 3 13:32:49.143: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 16.050840569s Apr 3 13:32:51.147: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 18.054919155s Apr 3 13:32:53.151: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 20.058730306s Apr 3 13:32:55.155: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Running", Reason="", readiness=true. Elapsed: 22.0630408s Apr 3 13:32:57.176: INFO: Pod "pod-subpath-test-configmap-7xvg": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.083542117s STEP: Saw pod success Apr 3 13:32:57.176: INFO: Pod "pod-subpath-test-configmap-7xvg" satisfied condition "success or failure" Apr 3 13:32:57.179: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-7xvg container test-container-subpath-configmap-7xvg: STEP: delete the pod Apr 3 13:32:57.200: INFO: Waiting for pod pod-subpath-test-configmap-7xvg to disappear Apr 3 13:32:57.204: INFO: Pod pod-subpath-test-configmap-7xvg no longer exists STEP: Deleting pod pod-subpath-test-configmap-7xvg Apr 3 13:32:57.204: INFO: Deleting pod "pod-subpath-test-configmap-7xvg" in namespace "subpath-2413" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:32:57.207: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2413" for this suite. Apr 3 13:33:03.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:33:03.300: INFO: namespace subpath-2413 deletion completed in 6.089183963s • [SLOW TEST:30.298 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:33:03.301: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-9f6cdb8e-5b1e-4004-b30f-d19d0fbc8215 STEP: Creating a pod to test consume secrets Apr 3 13:33:03.357: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-123c8b40-72f3-4c58-b316-baaaaab74fad" in namespace "projected-3046" to be "success or failure" Apr 3 13:33:03.360: INFO: Pod "pod-projected-secrets-123c8b40-72f3-4c58-b316-baaaaab74fad": Phase="Pending", Reason="", readiness=false. Elapsed: 3.517675ms Apr 3 13:33:05.367: INFO: Pod "pod-projected-secrets-123c8b40-72f3-4c58-b316-baaaaab74fad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009701004s Apr 3 13:33:07.373: INFO: Pod "pod-projected-secrets-123c8b40-72f3-4c58-b316-baaaaab74fad": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.015734874s STEP: Saw pod success Apr 3 13:33:07.373: INFO: Pod "pod-projected-secrets-123c8b40-72f3-4c58-b316-baaaaab74fad" satisfied condition "success or failure" Apr 3 13:33:07.375: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-123c8b40-72f3-4c58-b316-baaaaab74fad container projected-secret-volume-test: STEP: delete the pod Apr 3 13:33:07.422: INFO: Waiting for pod pod-projected-secrets-123c8b40-72f3-4c58-b316-baaaaab74fad to disappear Apr 3 13:33:07.426: INFO: Pod pod-projected-secrets-123c8b40-72f3-4c58-b316-baaaaab74fad no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:33:07.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3046" for this suite. Apr 3 13:33:13.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:33:13.519: INFO: namespace projected-3046 deletion completed in 6.086686202s • [SLOW TEST:10.218 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:33:13.520: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name secret-emptykey-test-98359d56-28e5-4520-b82b-fed241fa01ed [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:33:13.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3194" for this suite. Apr 3 13:33:19.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:33:19.679: INFO: namespace secrets-3194 deletion completed in 6.086166989s • [SLOW TEST:6.160 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:33:19.680: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl 
run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 3 13:33:19.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7925' Apr 3 13:33:19.820: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 3 13:33:19.820: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the deployment e2e-test-nginx-deployment was created STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created [AfterEach] [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562 Apr 3 13:33:21.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7925' Apr 3 13:33:21.967: INFO: stderr: "" Apr 3 13:33:21.967: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:33:21.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7925" for this suite. 
Apr 3 13:34:44.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:34:44.112: INFO: namespace kubectl-7925 deletion completed in 1m22.141223006s • [SLOW TEST:84.432 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:34:44.113: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 3 13:34:44.172: INFO: Waiting up to 5m0s for pod "downward-api-88e865e9-6102-4fce-b18a-0a3ea9947fb9" in namespace "downward-api-8418" to be "success or failure" Apr 3 13:34:44.187: INFO: Pod "downward-api-88e865e9-6102-4fce-b18a-0a3ea9947fb9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 14.878754ms Apr 3 13:34:46.190: INFO: Pod "downward-api-88e865e9-6102-4fce-b18a-0a3ea9947fb9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018852175s Apr 3 13:34:48.195: INFO: Pod "downward-api-88e865e9-6102-4fce-b18a-0a3ea9947fb9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023398901s STEP: Saw pod success Apr 3 13:34:48.195: INFO: Pod "downward-api-88e865e9-6102-4fce-b18a-0a3ea9947fb9" satisfied condition "success or failure" Apr 3 13:34:48.199: INFO: Trying to get logs from node iruya-worker pod downward-api-88e865e9-6102-4fce-b18a-0a3ea9947fb9 container dapi-container: STEP: delete the pod Apr 3 13:34:48.262: INFO: Waiting for pod downward-api-88e865e9-6102-4fce-b18a-0a3ea9947fb9 to disappear Apr 3 13:34:48.266: INFO: Pod downward-api-88e865e9-6102-4fce-b18a-0a3ea9947fb9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:34:48.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8418" for this suite. 
Apr 3 13:34:54.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:34:54.364: INFO: namespace downward-api-8418 deletion completed in 6.094164117s • [SLOW TEST:10.251 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:34:54.364: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-8162 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8162 to expose endpoints map[] Apr 3 13:34:54.493: INFO: Get endpoints failed (21.361786ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found Apr 3 13:34:55.498: INFO: successfully validated that service endpoint-test2 in namespace services-8162 exposes endpoints map[] (1.025748188s elapsed) STEP: Creating pod pod1 in namespace services-8162 STEP: waiting up to 3m0s for service 
endpoint-test2 in namespace services-8162 to expose endpoints map[pod1:[80]] Apr 3 13:34:58.590: INFO: successfully validated that service endpoint-test2 in namespace services-8162 exposes endpoints map[pod1:[80]] (3.084629877s elapsed) STEP: Creating pod pod2 in namespace services-8162 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8162 to expose endpoints map[pod1:[80] pod2:[80]] Apr 3 13:35:02.729: INFO: successfully validated that service endpoint-test2 in namespace services-8162 exposes endpoints map[pod1:[80] pod2:[80]] (4.134852343s elapsed) STEP: Deleting pod pod1 in namespace services-8162 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8162 to expose endpoints map[pod2:[80]] Apr 3 13:35:03.777: INFO: successfully validated that service endpoint-test2 in namespace services-8162 exposes endpoints map[pod2:[80]] (1.043707826s elapsed) STEP: Deleting pod pod2 in namespace services-8162 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-8162 to expose endpoints map[] Apr 3 13:35:04.796: INFO: successfully validated that service endpoint-test2 in namespace services-8162 exposes endpoints map[] (1.014379465s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:35:04.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8162" for this suite. 
Apr 3 13:35:26.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:35:26.921: INFO: namespace services-8162 deletion completed in 22.092187002s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.557 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:35:26.922: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 13:35:27.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4d8370bb-ffcc-417b-80cb-7b0b0ebc8c1c" in namespace "projected-3701" to be "success or failure" Apr 3 13:35:27.010: INFO: Pod 
"downwardapi-volume-4d8370bb-ffcc-417b-80cb-7b0b0ebc8c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.510295ms Apr 3 13:35:29.014: INFO: Pod "downwardapi-volume-4d8370bb-ffcc-417b-80cb-7b0b0ebc8c1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00793014s Apr 3 13:35:31.019: INFO: Pod "downwardapi-volume-4d8370bb-ffcc-417b-80cb-7b0b0ebc8c1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012324966s STEP: Saw pod success Apr 3 13:35:31.019: INFO: Pod "downwardapi-volume-4d8370bb-ffcc-417b-80cb-7b0b0ebc8c1c" satisfied condition "success or failure" Apr 3 13:35:31.022: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4d8370bb-ffcc-417b-80cb-7b0b0ebc8c1c container client-container: STEP: delete the pod Apr 3 13:35:31.053: INFO: Waiting for pod downwardapi-volume-4d8370bb-ffcc-417b-80cb-7b0b0ebc8c1c to disappear Apr 3 13:35:31.064: INFO: Pod downwardapi-volume-4d8370bb-ffcc-417b-80cb-7b0b0ebc8c1c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:35:31.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3701" for this suite. 
Apr 3 13:35:37.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:35:37.150: INFO: namespace projected-3701 deletion completed in 6.082663663s • [SLOW TEST:10.228 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:35:37.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-3376 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 3 13:35:37.230: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 3 13:35:59.344: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.21 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3376 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Apr 3 13:35:59.344: INFO: >>> kubeConfig: /root/.kube/config I0403 13:35:59.376445 6 log.go:172] (0xc000657c30) (0xc001bac6e0) Create stream I0403 13:35:59.376478 6 log.go:172] (0xc000657c30) (0xc001bac6e0) Stream added, broadcasting: 1 I0403 13:35:59.379765 6 log.go:172] (0xc000657c30) Reply frame received for 1 I0403 13:35:59.379813 6 log.go:172] (0xc000657c30) (0xc000574820) Create stream I0403 13:35:59.379828 6 log.go:172] (0xc000657c30) (0xc000574820) Stream added, broadcasting: 3 I0403 13:35:59.380821 6 log.go:172] (0xc000657c30) Reply frame received for 3 I0403 13:35:59.380866 6 log.go:172] (0xc000657c30) (0xc0012c4000) Create stream I0403 13:35:59.380884 6 log.go:172] (0xc000657c30) (0xc0012c4000) Stream added, broadcasting: 5 I0403 13:35:59.381957 6 log.go:172] (0xc000657c30) Reply frame received for 5 I0403 13:36:00.448138 6 log.go:172] (0xc000657c30) Data frame received for 3 I0403 13:36:00.448171 6 log.go:172] (0xc000574820) (3) Data frame handling I0403 13:36:00.448182 6 log.go:172] (0xc000574820) (3) Data frame sent I0403 13:36:00.448477 6 log.go:172] (0xc000657c30) Data frame received for 5 I0403 13:36:00.448491 6 log.go:172] (0xc0012c4000) (5) Data frame handling I0403 13:36:00.448684 6 log.go:172] (0xc000657c30) Data frame received for 3 I0403 13:36:00.448699 6 log.go:172] (0xc000574820) (3) Data frame handling I0403 13:36:00.450354 6 log.go:172] (0xc000657c30) Data frame received for 1 I0403 13:36:00.450372 6 log.go:172] (0xc001bac6e0) (1) Data frame handling I0403 13:36:00.450382 6 log.go:172] (0xc001bac6e0) (1) Data frame sent I0403 13:36:00.450393 6 log.go:172] (0xc000657c30) (0xc001bac6e0) Stream removed, broadcasting: 1 I0403 13:36:00.450501 6 log.go:172] (0xc000657c30) (0xc001bac6e0) Stream removed, broadcasting: 1 I0403 13:36:00.450520 6 log.go:172] (0xc000657c30) (0xc000574820) Stream removed, broadcasting: 3 I0403 13:36:00.450586 6 log.go:172] (0xc000657c30) Go away received I0403 13:36:00.450621 6 log.go:172] 
(0xc000657c30) (0xc0012c4000) Stream removed, broadcasting: 5 Apr 3 13:36:00.450: INFO: Found all expected endpoints: [netserver-0] Apr 3 13:36:00.455: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.106 8081 | grep -v '^\s*$'] Namespace:pod-network-test-3376 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 13:36:00.455: INFO: >>> kubeConfig: /root/.kube/config I0403 13:36:00.484895 6 log.go:172] (0xc000a3e630) (0xc0012c4280) Create stream I0403 13:36:00.484924 6 log.go:172] (0xc000a3e630) (0xc0012c4280) Stream added, broadcasting: 1 I0403 13:36:00.486968 6 log.go:172] (0xc000a3e630) Reply frame received for 1 I0403 13:36:00.487009 6 log.go:172] (0xc000a3e630) (0xc001bac780) Create stream I0403 13:36:00.487018 6 log.go:172] (0xc000a3e630) (0xc001bac780) Stream added, broadcasting: 3 I0403 13:36:00.487804 6 log.go:172] (0xc000a3e630) Reply frame received for 3 I0403 13:36:00.487825 6 log.go:172] (0xc000a3e630) (0xc000574d20) Create stream I0403 13:36:00.487838 6 log.go:172] (0xc000a3e630) (0xc000574d20) Stream added, broadcasting: 5 I0403 13:36:00.488469 6 log.go:172] (0xc000a3e630) Reply frame received for 5 I0403 13:36:01.547427 6 log.go:172] (0xc000a3e630) Data frame received for 3 I0403 13:36:01.547474 6 log.go:172] (0xc001bac780) (3) Data frame handling I0403 13:36:01.547519 6 log.go:172] (0xc001bac780) (3) Data frame sent I0403 13:36:01.547729 6 log.go:172] (0xc000a3e630) Data frame received for 3 I0403 13:36:01.547764 6 log.go:172] (0xc001bac780) (3) Data frame handling I0403 13:36:01.547783 6 log.go:172] (0xc000a3e630) Data frame received for 5 I0403 13:36:01.547804 6 log.go:172] (0xc000574d20) (5) Data frame handling I0403 13:36:01.550009 6 log.go:172] (0xc000a3e630) Data frame received for 1 I0403 13:36:01.550051 6 log.go:172] (0xc0012c4280) (1) Data frame handling I0403 13:36:01.550083 6 log.go:172] (0xc0012c4280) (1) Data frame sent I0403 
13:36:01.550114 6 log.go:172] (0xc000a3e630) (0xc0012c4280) Stream removed, broadcasting: 1 I0403 13:36:01.550149 6 log.go:172] (0xc000a3e630) Go away received I0403 13:36:01.550335 6 log.go:172] (0xc000a3e630) (0xc0012c4280) Stream removed, broadcasting: 1 I0403 13:36:01.550372 6 log.go:172] (0xc000a3e630) (0xc001bac780) Stream removed, broadcasting: 3 I0403 13:36:01.550385 6 log.go:172] (0xc000a3e630) (0xc000574d20) Stream removed, broadcasting: 5 Apr 3 13:36:01.550: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:36:01.550: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3376" for this suite. Apr 3 13:36:23.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:36:23.660: INFO: namespace pod-network-test-3376 deletion completed in 22.10527909s • [SLOW TEST:46.510 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:36:23.661: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium Apr 3 13:36:23.723: INFO: Waiting up to 5m0s for pod "pod-16669774-47d0-493b-8b86-87f2e4cbad1a" in namespace "emptydir-6189" to be "success or failure" Apr 3 13:36:23.760: INFO: Pod "pod-16669774-47d0-493b-8b86-87f2e4cbad1a": Phase="Pending", Reason="", readiness=false. Elapsed: 37.267874ms Apr 3 13:36:25.764: INFO: Pod "pod-16669774-47d0-493b-8b86-87f2e4cbad1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041379425s Apr 3 13:36:27.768: INFO: Pod "pod-16669774-47d0-493b-8b86-87f2e4cbad1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045489702s STEP: Saw pod success Apr 3 13:36:27.768: INFO: Pod "pod-16669774-47d0-493b-8b86-87f2e4cbad1a" satisfied condition "success or failure" Apr 3 13:36:27.771: INFO: Trying to get logs from node iruya-worker pod pod-16669774-47d0-493b-8b86-87f2e4cbad1a container test-container: STEP: delete the pod Apr 3 13:36:27.805: INFO: Waiting for pod pod-16669774-47d0-493b-8b86-87f2e4cbad1a to disappear Apr 3 13:36:27.818: INFO: Pod pod-16669774-47d0-493b-8b86-87f2e4cbad1a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:36:27.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6189" for this suite. 
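The `(root,0777,default)` emptyDir case writes a file into an emptyDir volume on the default medium and checks that the mounted path reports mode `-rwxrwxrwx`. A small sketch of that mode rendering, using the standard library (the helper name is illustrative):

```python
import stat

def mode_string(mode: int) -> str:
    """Render a regular file's permission bits the way the emptydir test
    output shows them, e.g. 0o777 -> "-rwxrwxrwx"."""
    return stat.filemode(stat.S_IFREG | mode)
```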
Apr 3 13:36:33.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:36:33.920: INFO: namespace emptydir-6189 deletion completed in 6.099309182s • [SLOW TEST:10.260 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:36:33.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 3 13:36:38.540: INFO: Successfully updated pod "labelsupdatee383b345-1e62-4052-92e6-6631f905f138" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:36:40.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2860" for this suite. 
Apr 3 13:37:02.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:37:02.795: INFO: namespace projected-2860 deletion completed in 22.098262973s • [SLOW TEST:28.875 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:37:02.796: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 13:37:02.853: INFO: Waiting up to 5m0s for pod "downwardapi-volume-feba8d78-6127-415c-b708-a7d5a984c450" in namespace "downward-api-1133" to be "success or failure" Apr 3 13:37:02.856: INFO: Pod "downwardapi-volume-feba8d78-6127-415c-b708-a7d5a984c450": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.395047ms Apr 3 13:37:04.860: INFO: Pod "downwardapi-volume-feba8d78-6127-415c-b708-a7d5a984c450": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007364974s Apr 3 13:37:06.864: INFO: Pod "downwardapi-volume-feba8d78-6127-415c-b708-a7d5a984c450": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011010156s STEP: Saw pod success Apr 3 13:37:06.864: INFO: Pod "downwardapi-volume-feba8d78-6127-415c-b708-a7d5a984c450" satisfied condition "success or failure" Apr 3 13:37:06.867: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-feba8d78-6127-415c-b708-a7d5a984c450 container client-container: STEP: delete the pod Apr 3 13:37:06.882: INFO: Waiting for pod downwardapi-volume-feba8d78-6127-415c-b708-a7d5a984c450 to disappear Apr 3 13:37:06.922: INFO: Pod downwardapi-volume-feba8d78-6127-415c-b708-a7d5a984c450 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:37:06.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1133" for this suite. 
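For the "container's memory limit" case, the downward API file contains the limit as a plain byte count, so a `limits.memory` of `128Mi` appears as `134217728`. A simplified sketch of the binary-suffix conversion (real Kubernetes quantity parsing handles many more forms; this covers only the suffixes such a test would hit):

```python
# Binary (power-of-two) suffixes used by Kubernetes resource quantities.
_SUFFIX = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3}

def quantity_to_bytes(q: str) -> int:
    """Convert a quantity like "128Mi" to the raw byte count the
    downward API volume file would contain."""
    for suffix, mult in _SUFFIX.items():
        if q.endswith(suffix):
            return int(q[:-len(suffix)]) * mult
    return int(q)  # already a plain integer byte count
```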
Apr 3 13:37:12.938: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:37:13.024: INFO: namespace downward-api-1133 deletion completed in 6.099686422s • [SLOW TEST:10.229 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:37:13.025: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 3 13:37:13.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9817' Apr 3 13:37:15.583: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 3 13:37:15.583: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc Apr 3 13:37:15.633: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-m6rqt] Apr 3 13:37:15.633: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-m6rqt" in namespace "kubectl-9817" to be "running and ready" Apr 3 13:37:15.662: INFO: Pod "e2e-test-nginx-rc-m6rqt": Phase="Pending", Reason="", readiness=false. Elapsed: 28.480348ms Apr 3 13:37:17.666: INFO: Pod "e2e-test-nginx-rc-m6rqt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032297629s Apr 3 13:37:19.670: INFO: Pod "e2e-test-nginx-rc-m6rqt": Phase="Running", Reason="", readiness=true. Elapsed: 4.03665442s Apr 3 13:37:19.670: INFO: Pod "e2e-test-nginx-rc-m6rqt" satisfied condition "running and ready" Apr 3 13:37:19.670: INFO: Wanted all 1 pods to be running and ready. Result: true. 
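The deprecated `kubectl run --generator=run/v1` invocation in the log creates a ReplicationController that selects its pod via a `run=<name>` label. Roughly the equivalent manifest, sketched as plain dicts (the deprecation warning points at `--generator=run-pod/v1` or `kubectl create` as replacements; this is an illustration, not kubectl's exact output):

```python
def rc_manifest(name: str, image: str, replicas: int = 1) -> dict:
    """Approximate ReplicationController that `kubectl run <name>
    --generator=run/v1 --image=<image>` would have created."""
    labels = {"run": name}
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": labels,
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }

rc = rc_manifest("e2e-test-nginx-rc", "docker.io/library/nginx:1.14-alpine")
```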
Pods: [e2e-test-nginx-rc-m6rqt] Apr 3 13:37:19.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9817' Apr 3 13:37:19.792: INFO: stderr: "" Apr 3 13:37:19.792: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 Apr 3 13:37:19.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9817' Apr 3 13:37:19.889: INFO: stderr: "" Apr 3 13:37:19.889: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:37:19.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9817" for this suite. Apr 3 13:37:25.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:37:25.984: INFO: namespace kubectl-9817 deletion completed in 6.091195997s • [SLOW TEST:12.959 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:37:25.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-d82a010c-68be-42d8-a26f-e6dafc9fbe3d STEP: Creating a pod to test consume secrets Apr 3 13:37:26.048: INFO: Waiting up to 5m0s for pod "pod-secrets-9f9b86be-8a50-4eeb-9859-9268077b1e09" in namespace "secrets-4150" to be "success or failure" Apr 3 13:37:26.089: INFO: Pod "pod-secrets-9f9b86be-8a50-4eeb-9859-9268077b1e09": Phase="Pending", Reason="", readiness=false. Elapsed: 41.033789ms Apr 3 13:37:28.094: INFO: Pod "pod-secrets-9f9b86be-8a50-4eeb-9859-9268077b1e09": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045577185s Apr 3 13:37:30.098: INFO: Pod "pod-secrets-9f9b86be-8a50-4eeb-9859-9268077b1e09": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.050058757s STEP: Saw pod success Apr 3 13:37:30.099: INFO: Pod "pod-secrets-9f9b86be-8a50-4eeb-9859-9268077b1e09" satisfied condition "success or failure" Apr 3 13:37:30.102: INFO: Trying to get logs from node iruya-worker pod pod-secrets-9f9b86be-8a50-4eeb-9859-9268077b1e09 container secret-volume-test: STEP: delete the pod Apr 3 13:37:30.119: INFO: Waiting for pod pod-secrets-9f9b86be-8a50-4eeb-9859-9268077b1e09 to disappear Apr 3 13:37:30.130: INFO: Pod pod-secrets-9f9b86be-8a50-4eeb-9859-9268077b1e09 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:37:30.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4150" for this suite. Apr 3 13:37:36.152: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:37:36.231: INFO: namespace secrets-4150 deletion completed in 6.097910386s • [SLOW TEST:10.247 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:37:36.232: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in 
namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes Apr 3 13:37:36.312: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:37:52.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8354" for this suite. Apr 3 13:37:58.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:37:58.261: INFO: namespace pods-8354 deletion completed in 6.095010527s • [SLOW TEST:22.029 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:37:58.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 13:37:58.324: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5dd02bbc-9a0d-4217-974f-14fd63ba8d04" in namespace "downward-api-3674" to be "success or failure" Apr 3 13:37:58.328: INFO: Pod "downwardapi-volume-5dd02bbc-9a0d-4217-974f-14fd63ba8d04": Phase="Pending", Reason="", readiness=false. Elapsed: 3.484302ms Apr 3 13:38:00.332: INFO: Pod "downwardapi-volume-5dd02bbc-9a0d-4217-974f-14fd63ba8d04": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007645571s Apr 3 13:38:02.336: INFO: Pod "downwardapi-volume-5dd02bbc-9a0d-4217-974f-14fd63ba8d04": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011862099s STEP: Saw pod success Apr 3 13:38:02.336: INFO: Pod "downwardapi-volume-5dd02bbc-9a0d-4217-974f-14fd63ba8d04" satisfied condition "success or failure" Apr 3 13:38:02.340: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-5dd02bbc-9a0d-4217-974f-14fd63ba8d04 container client-container: STEP: delete the pod Apr 3 13:38:02.366: INFO: Waiting for pod downwardapi-volume-5dd02bbc-9a0d-4217-974f-14fd63ba8d04 to disappear Apr 3 13:38:02.376: INFO: Pod downwardapi-volume-5dd02bbc-9a0d-4217-974f-14fd63ba8d04 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:38:02.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3674" for this suite. 
Apr 3 13:38:08.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:38:08.502: INFO: namespace downward-api-3674 deletion completed in 6.123725719s • [SLOW TEST:10.241 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:38:08.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 3 13:38:13.119: INFO: Successfully updated pod "labelsupdate76ed366b-9b4c-459a-a772-af5becf256fc" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:38:15.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5635" for this suite. 
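Both "update labels on modification" tests above (projected and plain downward API volumes) patch the pod's labels and then wait for the mounted labels file to be rewritten. The downward API renders `metadata.labels` as one `key="value"` line per label; a sketch of that rendering (helper name illustrative, keys sorted here for determinism):

```python
def render_labels(labels: dict) -> str:
    """Render metadata.labels the way a downward API volume file presents
    them: one key="value" pair per line."""
    return "\n".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
```

After a label update, the kubelet regenerates this file, which is why the test only needs to re-read the same path to observe the change.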
Apr 3 13:38:37.191: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:38:37.274: INFO: namespace downward-api-5635 deletion completed in 22.111539501s • [SLOW TEST:28.771 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:38:37.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-29b19fa3-917b-47a4-baea-dc7f83bf448e STEP: Creating a pod to test consume configMaps Apr 3 13:38:37.419: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0e9f2c0a-04eb-4443-ab63-3eca757f87f9" in namespace "projected-1729" to be "success or failure" Apr 3 13:38:37.421: INFO: Pod "pod-projected-configmaps-0e9f2c0a-04eb-4443-ab63-3eca757f87f9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.700782ms Apr 3 13:38:39.444: INFO: Pod "pod-projected-configmaps-0e9f2c0a-04eb-4443-ab63-3eca757f87f9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025026576s Apr 3 13:38:41.448: INFO: Pod "pod-projected-configmaps-0e9f2c0a-04eb-4443-ab63-3eca757f87f9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029398474s STEP: Saw pod success Apr 3 13:38:41.448: INFO: Pod "pod-projected-configmaps-0e9f2c0a-04eb-4443-ab63-3eca757f87f9" satisfied condition "success or failure" Apr 3 13:38:41.451: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-0e9f2c0a-04eb-4443-ab63-3eca757f87f9 container projected-configmap-volume-test: STEP: delete the pod Apr 3 13:38:41.485: INFO: Waiting for pod pod-projected-configmaps-0e9f2c0a-04eb-4443-ab63-3eca757f87f9 to disappear Apr 3 13:38:41.526: INFO: Pod pod-projected-configmaps-0e9f2c0a-04eb-4443-ab63-3eca757f87f9 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:38:41.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1729" for this suite. 
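The non-root projected configMap case combines two pieces: a projected volume sourcing the configMap, and a pod-level `securityContext` that runs the container as a non-root UID while still being able to read the mounted keys. A hypothetical sketch of those pieces as dicts (the UID, image, and mount path are assumptions; the configMap name is taken from the log):

```python
# Sketch of the relevant pod spec fields; runAsUser must be non-zero for
# the "as non-root" variant of this test.
pod_spec = {
    "securityContext": {"runAsUser": 1000},  # assumed UID, not from the log
    "containers": [{
        "name": "projected-configmap-volume-test",
        "image": "busybox",
        "volumeMounts": [{"name": "projected-configmap-volume",
                          "mountPath": "/etc/projected-configmap-volume"}],
    }],
    "volumes": [{
        "name": "projected-configmap-volume",
        "projected": {"sources": [{"configMap": {
            "name": "projected-configmap-test-volume-29b19fa3-917b-47a4-baea-dc7f83bf448e",
        }}]},
    }],
}
```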
Apr 3 13:38:47.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:38:47.762: INFO: namespace projected-1729 deletion completed in 6.23073519s • [SLOW TEST:10.488 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:38:47.762: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Apr 3 13:38:47.852: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:47.861: INFO: Number of nodes with available pods: 0 Apr 3 13:38:47.861: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:38:48.866: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:48.870: INFO: Number of nodes with available pods: 0 Apr 3 13:38:48.870: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:38:49.866: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:49.870: INFO: Number of nodes with available pods: 0 Apr 3 13:38:49.870: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:38:50.866: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:50.870: INFO: Number of nodes with available pods: 0 Apr 3 13:38:50.870: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:38:51.867: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:51.870: INFO: Number of nodes with available pods: 2 Apr 3 13:38:51.870: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. 
Apr 3 13:38:51.966: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:51.970: INFO: Number of nodes with available pods: 1 Apr 3 13:38:51.970: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:38:52.974: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:52.977: INFO: Number of nodes with available pods: 1 Apr 3 13:38:52.977: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:38:53.975: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:53.978: INFO: Number of nodes with available pods: 1 Apr 3 13:38:53.978: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:38:54.975: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:54.978: INFO: Number of nodes with available pods: 1 Apr 3 13:38:54.978: INFO: Node iruya-worker is running more than one daemon pod Apr 3 13:38:55.975: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node Apr 3 13:38:55.978: INFO: Number of nodes with available pods: 2 Apr 3 13:38:55.978: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4089, will wait for the garbage collector to delete the pods Apr 3 13:38:56.040: INFO: Deleting DaemonSet.extensions daemon-set took: 6.343839ms Apr 3 13:38:56.340: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.266135ms Apr 3 13:39:02.244: INFO: Number of nodes with available pods: 0 Apr 3 13:39:02.244: INFO: Number of running nodes: 0, number of available pods: 0 Apr 3 13:39:02.247: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4089/daemonsets","resourceVersion":"3398916"},"items":null} Apr 3 13:39:02.250: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4089/pods","resourceVersion":"3398916"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:39:02.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-4089" for this suite. 
Apr 3 13:39:08.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:39:08.354: INFO: namespace daemonsets-4089 deletion completed in 6.091201637s • [SLOW TEST:20.592 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:39:08.354: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:39:08.411: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/ pods/ (200; 5.823411ms)
Apr 3 13:39:08.414: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.198728ms)
Apr 3 13:39:08.439: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 24.671615ms)
Apr 3 13:39:08.442: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.847167ms)
Apr 3 13:39:08.446: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.566238ms)
Apr 3 13:39:08.450: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.742155ms)
Apr 3 13:39:08.454: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 4.540912ms)
Apr 3 13:39:08.458: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.563431ms)
Apr 3 13:39:08.462: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.791356ms)
Apr 3 13:39:08.465: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.254479ms)
Apr 3 13:39:08.468: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.112896ms)
Apr 3 13:39:08.471: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.07691ms)
Apr 3 13:39:08.474: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.040303ms)
Apr 3 13:39:08.478: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.000155ms)
Apr 3 13:39:08.480: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 2.790959ms)
Apr 3 13:39:08.484: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.183788ms)
Apr 3 13:39:08.487: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.092947ms)
Apr 3 13:39:08.490: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.067817ms)
Apr 3 13:39:08.493: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/ (200; 3.602088ms)
Apr 3 13:39:08.497: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/: containers/ pods/
(200; 3.434487ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:39:08.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3997" for this suite. Apr 3 13:39:14.516: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:39:14.591: INFO: namespace proxy-3997 deletion completed in 6.09124688s • [SLOW TEST:6.238 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:39:14.592: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token Apr 3 13:39:15.202: INFO: created pod pod-service-account-defaultsa Apr 3 13:39:15.202: INFO: pod pod-service-account-defaultsa service account token volume mount: true Apr 3 13:39:15.225: INFO: created pod pod-service-account-mountsa Apr 3 13:39:15.225: INFO: pod pod-service-account-mountsa 
service account token volume mount: true Apr 3 13:39:15.258: INFO: created pod pod-service-account-nomountsa Apr 3 13:39:15.258: INFO: pod pod-service-account-nomountsa service account token volume mount: false Apr 3 13:39:15.288: INFO: created pod pod-service-account-defaultsa-mountspec Apr 3 13:39:15.288: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true Apr 3 13:39:15.325: INFO: created pod pod-service-account-mountsa-mountspec Apr 3 13:39:15.325: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true Apr 3 13:39:15.334: INFO: created pod pod-service-account-nomountsa-mountspec Apr 3 13:39:15.334: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true Apr 3 13:39:15.362: INFO: created pod pod-service-account-defaultsa-nomountspec Apr 3 13:39:15.362: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false Apr 3 13:39:15.415: INFO: created pod pod-service-account-mountsa-nomountspec Apr 3 13:39:15.415: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false Apr 3 13:39:15.420: INFO: created pod pod-service-account-nomountsa-nomountspec Apr 3 13:39:15.420: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:39:15.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-5088" for this suite. 
Apr 3 13:39:41.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:39:41.667: INFO: namespace svcaccounts-5088 deletion completed in 26.221458015s • [SLOW TEST:27.075 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:39:41.668: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes Apr 3 13:39:45.799: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Apr 3 13:39:55.890: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] 
[sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:39:55.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7367" for this suite. Apr 3 13:40:01.920: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:40:02.008: INFO: namespace pods-7367 deletion completed in 6.111485362s • [SLOW TEST:20.340 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:40:02.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Apr 3 13:40:06.634: INFO: Successfully updated pod 
"annotationupdate97dc42b6-2ae6-4383-a6d7-12616ec84aef" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:40:08.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2630" for this suite. Apr 3 13:40:30.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:40:30.774: INFO: namespace projected-2630 deletion completed in 22.101037546s • [SLOW TEST:28.766 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:40:30.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test use defaults Apr 3 13:40:30.849: INFO: Waiting up to 5m0s for pod "client-containers-52df22ac-facb-4332-b932-0d06fb267f3b" in namespace "containers-3474" to be "success or failure" Apr 3 13:40:30.874: INFO: Pod 
"client-containers-52df22ac-facb-4332-b932-0d06fb267f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 24.591522ms Apr 3 13:40:32.877: INFO: Pod "client-containers-52df22ac-facb-4332-b932-0d06fb267f3b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027967008s Apr 3 13:40:34.882: INFO: Pod "client-containers-52df22ac-facb-4332-b932-0d06fb267f3b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032732616s STEP: Saw pod success Apr 3 13:40:34.882: INFO: Pod "client-containers-52df22ac-facb-4332-b932-0d06fb267f3b" satisfied condition "success or failure" Apr 3 13:40:34.886: INFO: Trying to get logs from node iruya-worker pod client-containers-52df22ac-facb-4332-b932-0d06fb267f3b container test-container: STEP: delete the pod Apr 3 13:40:34.944: INFO: Waiting for pod client-containers-52df22ac-facb-4332-b932-0d06fb267f3b to disappear Apr 3 13:40:34.953: INFO: Pod client-containers-52df22ac-facb-4332-b932-0d06fb267f3b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:40:34.953: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-3474" for this suite. 
Apr 3 13:40:40.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:40:41.032: INFO: namespace containers-3474 deletion completed in 6.075607597s • [SLOW TEST:10.257 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:40:41.033: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-19245358-9ec2-4d8e-a341-7ff565dd5c2c STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:40:45.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4842" for this suite. 
Apr 3 13:41:07.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:41:07.240: INFO: namespace configmap-4842 deletion completed in 22.095392138s • [SLOW TEST:26.207 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:41:07.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:41:11.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-1307" for this suite. 
Apr 3 13:41:17.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:41:17.391: INFO: namespace kubelet-test-1307 deletion completed in 6.088720994s • [SLOW TEST:10.151 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:41:17.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Apr 3 13:41:17.443: INFO: apiVersion: v1 kind: Service metadata: name: redis-slave labels: app: redis role: slave tier: backend spec: ports: - port: 6379 selector: app: redis role: slave tier: backend Apr 3 13:41:17.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5486' Apr 3 
13:41:17.780: INFO: stderr: "" Apr 3 13:41:17.780: INFO: stdout: "service/redis-slave created\n" Apr 3 13:41:17.780: INFO: apiVersion: v1 kind: Service metadata: name: redis-master labels: app: redis role: master tier: backend spec: ports: - port: 6379 targetPort: 6379 selector: app: redis role: master tier: backend Apr 3 13:41:17.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5486' Apr 3 13:41:18.045: INFO: stderr: "" Apr 3 13:41:18.045: INFO: stdout: "service/redis-master created\n" Apr 3 13:41:18.045: INFO: apiVersion: v1 kind: Service metadata: name: frontend labels: app: guestbook tier: frontend spec: # if your cluster supports it, uncomment the following to automatically create # an external load-balanced IP for the frontend service. # type: LoadBalancer ports: - port: 80 selector: app: guestbook tier: frontend Apr 3 13:41:18.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5486' Apr 3 13:41:18.356: INFO: stderr: "" Apr 3 13:41:18.356: INFO: stdout: "service/frontend created\n" Apr 3 13:41:18.356: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: frontend spec: replicas: 3 selector: matchLabels: app: guestbook tier: frontend template: metadata: labels: app: guestbook tier: frontend spec: containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v6 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access environment variables to find service host # info, comment out the 'value: dns' line above, and uncomment the # line below: # value: env ports: - containerPort: 80 Apr 3 13:41:18.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5486' Apr 3 13:41:18.605: INFO: stderr: "" Apr 3 13:41:18.605: INFO: stdout: "deployment.apps/frontend created\n" Apr 3 13:41:18.605: 
INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-master spec: replicas: 1 selector: matchLabels: app: redis role: master tier: backend template: metadata: labels: app: redis role: master tier: backend spec: containers: - name: master image: gcr.io/kubernetes-e2e-test-images/redis:1.0 resources: requests: cpu: 100m memory: 100Mi ports: - containerPort: 6379 Apr 3 13:41:18.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5486' Apr 3 13:41:18.966: INFO: stderr: "" Apr 3 13:41:18.966: INFO: stdout: "deployment.apps/redis-master created\n" Apr 3 13:41:18.966: INFO: apiVersion: apps/v1 kind: Deployment metadata: name: redis-slave spec: replicas: 2 selector: matchLabels: app: redis role: slave tier: backend template: metadata: labels: app: redis role: slave tier: backend spec: containers: - name: slave image: gcr.io/google-samples/gb-redisslave:v3 resources: requests: cpu: 100m memory: 100Mi env: - name: GET_HOSTS_FROM value: dns # If your cluster config does not include a dns service, then to # instead access an environment variable to find the master # service's host, comment out the 'value: dns' line above, and # uncomment the line below: # value: env ports: - containerPort: 6379 Apr 3 13:41:18.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5486' Apr 3 13:41:19.231: INFO: stderr: "" Apr 3 13:41:19.232: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Apr 3 13:41:19.232: INFO: Waiting for all frontend pods to be Running. Apr 3 13:41:24.282: INFO: Waiting for frontend to serve content. Apr 3 13:41:25.324: INFO: Trying to add a new entry to the guestbook. Apr 3 13:41:25.342: INFO: Verifying that added entry can be retrieved. Apr 3 13:41:25.365: INFO: Failed to get response from guestbook. 
err: , response: {"data": ""}
STEP: using delete to clean up resources
Apr 3 13:41:30.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5486'
Apr 3 13:41:30.539: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 13:41:30.539: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 13:41:30.539: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5486'
Apr 3 13:41:30.675: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 13:41:30.675: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 13:41:30.675: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5486'
Apr 3 13:41:30.794: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 13:41:30.794: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 13:41:30.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5486'
Apr 3 13:41:30.892: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 13:41:30.892: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 13:41:30.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5486'
Apr 3 13:41:30.986: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 13:41:30.986: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Apr 3 13:41:30.986: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5486'
Apr 3 13:41:31.076: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 13:41:31.076: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:41:31.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5486" for this suite.
Apr 3 13:42:13.088: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:42:13.167: INFO: namespace kubectl-5486 deletion completed in 42.088504812s • [SLOW TEST:55.776 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:42:13.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap that has name configmap-test-emptyKey-700479af-1964-439a-80e2-d329fd40e22d [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:42:13.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8875" for this suite. 
Apr 3 13:42:19.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:42:19.387: INFO: namespace configmap-8875 deletion completed in 6.1254173s • [SLOW TEST:6.219 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:42:19.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 13:42:19.449: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a65366ba-11c7-4efd-b17d-5f7dba07bbfa" in namespace "projected-5738" to be "success or failure" Apr 3 13:42:19.452: INFO: Pod "downwardapi-volume-a65366ba-11c7-4efd-b17d-5f7dba07bbfa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.147515ms Apr 3 13:42:21.477: INFO: Pod "downwardapi-volume-a65366ba-11c7-4efd-b17d-5f7dba07bbfa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.028452642s Apr 3 13:42:23.489: INFO: Pod "downwardapi-volume-a65366ba-11c7-4efd-b17d-5f7dba07bbfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.040456727s STEP: Saw pod success Apr 3 13:42:23.489: INFO: Pod "downwardapi-volume-a65366ba-11c7-4efd-b17d-5f7dba07bbfa" satisfied condition "success or failure" Apr 3 13:42:23.492: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a65366ba-11c7-4efd-b17d-5f7dba07bbfa container client-container: STEP: delete the pod Apr 3 13:42:23.512: INFO: Waiting for pod downwardapi-volume-a65366ba-11c7-4efd-b17d-5f7dba07bbfa to disappear Apr 3 13:42:23.523: INFO: Pod downwardapi-volume-a65366ba-11c7-4efd-b17d-5f7dba07bbfa no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:42:23.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5738" for this suite. 
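The framework does not echo the probe pod's manifest into the log. A minimal reconstruction of a pod that surfaces its own memory request through a projected downward API volume, matching the "success or failure" wait above (pod name, image, and file path are hypothetical; the test generates a UUID-suffixed name), would be:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # hypothetical; the test uses a UUID name
spec:
  restartPolicy: Never               # pod runs once, reaching Phase=Succeeded
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_request"]
    resources:
      requests:
        memory: 64Mi                 # the value the volume file should contain
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
```

The container exits after printing the file, so the pod reaches Phase="Succeeded", which is the condition the framework polls for in the log above.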
Apr 3 13:42:29.550: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:42:29.627: INFO: namespace projected-5738 deletion completed in 6.100835099s • [SLOW TEST:10.240 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:42:29.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:42:35.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3977" for this suite. Apr 3 13:42:41.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:42:41.999: INFO: namespace namespaces-3977 deletion completed in 6.087806148s STEP: Destroying namespace "nsdeletetest-8357" for this suite. Apr 3 13:42:42.002: INFO: Namespace nsdeletetest-8357 was already deleted STEP: Destroying namespace "nsdeletetest-6644" for this suite. Apr 3 13:42:48.014: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:42:48.092: INFO: namespace nsdeletetest-6644 deletion completed in 6.090097572s • [SLOW TEST:18.464 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:42:48.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in 
namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars Apr 3 13:42:48.148: INFO: Waiting up to 5m0s for pod "downward-api-dde1b707-755d-48e4-a360-9872ff2c6afd" in namespace "downward-api-8780" to be "success or failure" Apr 3 13:42:48.160: INFO: Pod "downward-api-dde1b707-755d-48e4-a360-9872ff2c6afd": Phase="Pending", Reason="", readiness=false. Elapsed: 11.712385ms Apr 3 13:42:50.180: INFO: Pod "downward-api-dde1b707-755d-48e4-a360-9872ff2c6afd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031506748s Apr 3 13:42:52.184: INFO: Pod "downward-api-dde1b707-755d-48e4-a360-9872ff2c6afd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036014898s STEP: Saw pod success Apr 3 13:42:52.184: INFO: Pod "downward-api-dde1b707-755d-48e4-a360-9872ff2c6afd" satisfied condition "success or failure" Apr 3 13:42:52.188: INFO: Trying to get logs from node iruya-worker pod downward-api-dde1b707-755d-48e4-a360-9872ff2c6afd container dapi-container: STEP: delete the pod Apr 3 13:42:52.209: INFO: Waiting for pod downward-api-dde1b707-755d-48e4-a360-9872ff2c6afd to disappear Apr 3 13:42:52.214: INFO: Pod downward-api-dde1b707-755d-48e4-a360-9872ff2c6afd no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:42:52.214: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8780" for this suite. 
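For contrast with the volume-based variant above, the env-var flavour of this test injects the same metadata through `fieldRef` environment variables. A sketch of such a pod (name, image, and variable names are illustrative, not the test's actual spec):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-example         # hypothetical; the test uses a UUID name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
```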
Apr 3 13:42:58.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:42:58.317: INFO: namespace downward-api-8780 deletion completed in 6.100489748s • [SLOW TEST:10.225 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:42:58.318: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-map-a4b3fdf7-339a-4b86-a6ee-e6c6ed04d006 STEP: Creating a pod to test consume secrets Apr 3 13:42:58.424: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d907369-c2ec-4b50-8df6-d4eb29bad432" in namespace "projected-5046" to be "success or failure" Apr 3 13:42:58.439: INFO: Pod "pod-projected-secrets-9d907369-c2ec-4b50-8df6-d4eb29bad432": Phase="Pending", Reason="", readiness=false. 
Elapsed: 15.21267ms Apr 3 13:43:00.443: INFO: Pod "pod-projected-secrets-9d907369-c2ec-4b50-8df6-d4eb29bad432": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018919869s Apr 3 13:43:02.447: INFO: Pod "pod-projected-secrets-9d907369-c2ec-4b50-8df6-d4eb29bad432": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023483536s STEP: Saw pod success Apr 3 13:43:02.447: INFO: Pod "pod-projected-secrets-9d907369-c2ec-4b50-8df6-d4eb29bad432" satisfied condition "success or failure" Apr 3 13:43:02.450: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-9d907369-c2ec-4b50-8df6-d4eb29bad432 container projected-secret-volume-test: STEP: delete the pod Apr 3 13:43:02.484: INFO: Waiting for pod pod-projected-secrets-9d907369-c2ec-4b50-8df6-d4eb29bad432 to disappear Apr 3 13:43:02.494: INFO: Pod pod-projected-secrets-9d907369-c2ec-4b50-8df6-d4eb29bad432 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:43:02.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5046" for this suite. 
Apr 3 13:43:08.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:43:08.608: INFO: namespace projected-5046 deletion completed in 6.108783733s • [SLOW TEST:10.290 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:43:08.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 13:43:08.700: INFO: Creating ReplicaSet my-hostname-basic-7dfeb786-94a6-4902-a5ce-a20994915b7f Apr 3 13:43:08.716: INFO: Pod name my-hostname-basic-7dfeb786-94a6-4902-a5ce-a20994915b7f: Found 0 pods out of 1 Apr 3 13:43:13.720: INFO: Pod name my-hostname-basic-7dfeb786-94a6-4902-a5ce-a20994915b7f: Found 1 pods out of 1 Apr 3 13:43:13.720: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-7dfeb786-94a6-4902-a5ce-a20994915b7f" is running Apr 3 13:43:13.723: INFO: Pod "my-hostname-basic-7dfeb786-94a6-4902-a5ce-a20994915b7f-jshgs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 
+0000 UTC LastTransitionTime:2020-04-03 13:43:08 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 13:43:11 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 13:43:11 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 13:43:08 +0000 UTC Reason: Message:}]) Apr 3 13:43:13.723: INFO: Trying to dial the pod Apr 3 13:43:18.734: INFO: Controller my-hostname-basic-7dfeb786-94a6-4902-a5ce-a20994915b7f: Got expected result from replica 1 [my-hostname-basic-7dfeb786-94a6-4902-a5ce-a20994915b7f-jshgs]: "my-hostname-basic-7dfeb786-94a6-4902-a5ce-a20994915b7f-jshgs", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:43:18.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-1449" for this suite. 
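The ReplicaSet created above is not printed in the log. Based on the pod names and the hostname echoed back from the replica, a plausible reconstruction looks like the following (the image and port are assumptions inferred from the expected-response check; the real name carries a UUID suffix):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-hostname-basic            # the test appends a UUID to this
spec:
  replicas: 1
  selector:
    matchLabels:
      name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        # serve-hostname answers HTTP requests with the pod's hostname, which
        # is how "Got expected result from replica 1" is verified in the log
        # (image is an assumption, not taken from this log)
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
        ports:
        - containerPort: 9376
```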
Apr 3 13:43:24.751: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:43:24.846: INFO: namespace replicaset-1449 deletion completed in 6.107883912s • [SLOW TEST:16.238 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:43:24.846: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:43:24.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-8587" for this suite. 
Apr 3 13:43:30.919: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:43:30.996: INFO: namespace services-8587 deletion completed in 6.089343218s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:6.150 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:43:30.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-8c76f888-9a67-4136-bca7-d49a65c511e0 STEP: Creating a pod to test consume configMaps Apr 3 13:43:31.061: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b36271b4-440a-4a30-9312-f7e702270882" in namespace "projected-6806" to be "success or failure" Apr 3 13:43:31.065: INFO: Pod "pod-projected-configmaps-b36271b4-440a-4a30-9312-f7e702270882": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.87429ms Apr 3 13:43:33.069: INFO: Pod "pod-projected-configmaps-b36271b4-440a-4a30-9312-f7e702270882": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008018199s Apr 3 13:43:35.073: INFO: Pod "pod-projected-configmaps-b36271b4-440a-4a30-9312-f7e702270882": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012195012s STEP: Saw pod success Apr 3 13:43:35.073: INFO: Pod "pod-projected-configmaps-b36271b4-440a-4a30-9312-f7e702270882" satisfied condition "success or failure" Apr 3 13:43:35.077: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-b36271b4-440a-4a30-9312-f7e702270882 container projected-configmap-volume-test: STEP: delete the pod Apr 3 13:43:35.096: INFO: Waiting for pod pod-projected-configmaps-b36271b4-440a-4a30-9312-f7e702270882 to disappear Apr 3 13:43:35.111: INFO: Pod pod-projected-configmaps-b36271b4-440a-4a30-9312-f7e702270882 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:43:35.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6806" for this suite. 
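The "as non-root" and "with mappings" parts of this test correspond to a pod-level `securityContext` and a key-to-path remap in the projected configMap source. A hedged sketch (paths and key names are illustrative; the configMap name is taken from the log above):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical; the test uses a UUID name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                        # the "as non-root" condition
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/path/to/data-2"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-8c76f888-9a67-4136-bca7-d49a65c511e0
          items:                           # "with mappings": remap a key to a new path
          - key: data-2                    # illustrative key name
            path: path/to/data-2
```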
Apr 3 13:43:41.157: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:43:41.237: INFO: namespace projected-6806 deletion completed in 6.122636763s • [SLOW TEST:10.241 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:43:41.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium Apr 3 13:43:41.420: INFO: Waiting up to 5m0s for pod "pod-287f6492-0369-4f88-97bf-f8bd0fc6fdbf" in namespace "emptydir-4636" to be "success or failure" Apr 3 13:43:41.446: INFO: Pod "pod-287f6492-0369-4f88-97bf-f8bd0fc6fdbf": Phase="Pending", Reason="", readiness=false. Elapsed: 25.847747ms Apr 3 13:43:43.450: INFO: Pod "pod-287f6492-0369-4f88-97bf-f8bd0fc6fdbf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.030070962s Apr 3 13:43:45.455: INFO: Pod "pod-287f6492-0369-4f88-97bf-f8bd0fc6fdbf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.035017556s STEP: Saw pod success Apr 3 13:43:45.455: INFO: Pod "pod-287f6492-0369-4f88-97bf-f8bd0fc6fdbf" satisfied condition "success or failure" Apr 3 13:43:45.463: INFO: Trying to get logs from node iruya-worker pod pod-287f6492-0369-4f88-97bf-f8bd0fc6fdbf container test-container: STEP: delete the pod Apr 3 13:43:45.500: INFO: Waiting for pod pod-287f6492-0369-4f88-97bf-f8bd0fc6fdbf to disappear Apr 3 13:43:45.537: INFO: Pod pod-287f6492-0369-4f88-97bf-f8bd0fc6fdbf no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:43:45.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4636" for this suite. Apr 3 13:43:51.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:43:51.633: INFO: namespace emptydir-4636 deletion completed in 6.092192056s • [SLOW TEST:10.395 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:43:51.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-b270af54-cbc2-4fb0-a656-49c307c284f7 STEP: Creating a pod to test consume secrets Apr 3 13:43:51.701: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f0eb288a-34db-411e-a74e-b6eccaa6e04c" in namespace "projected-7676" to be "success or failure" Apr 3 13:43:51.715: INFO: Pod "pod-projected-secrets-f0eb288a-34db-411e-a74e-b6eccaa6e04c": Phase="Pending", Reason="", readiness=false. Elapsed: 13.920182ms Apr 3 13:43:53.719: INFO: Pod "pod-projected-secrets-f0eb288a-34db-411e-a74e-b6eccaa6e04c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017577017s Apr 3 13:43:55.723: INFO: Pod "pod-projected-secrets-f0eb288a-34db-411e-a74e-b6eccaa6e04c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021888626s STEP: Saw pod success Apr 3 13:43:55.723: INFO: Pod "pod-projected-secrets-f0eb288a-34db-411e-a74e-b6eccaa6e04c" satisfied condition "success or failure" Apr 3 13:43:55.726: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-f0eb288a-34db-411e-a74e-b6eccaa6e04c container secret-volume-test: STEP: delete the pod Apr 3 13:43:55.758: INFO: Waiting for pod pod-projected-secrets-f0eb288a-34db-411e-a74e-b6eccaa6e04c to disappear Apr 3 13:43:55.766: INFO: Pod pod-projected-secrets-f0eb288a-34db-411e-a74e-b6eccaa6e04c no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:43:55.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7676" for this suite. 
Apr 3 13:44:01.782: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:44:01.884: INFO: namespace projected-7676 deletion completed in 6.115212901s • [SLOW TEST:10.252 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:44:01.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Apr 3 13:44:01.917: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Apr 3 13:44:01.926: INFO: Waiting for terminating namespaces to be deleted... 
Apr 3 13:44:01.929: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 3 13:44:01.933: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 3 13:44:01.933: INFO: Container kube-proxy ready: true, restart count 0
Apr 3 13:44:01.933: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 3 13:44:01.933: INFO: Container kindnet-cni ready: true, restart count 0
Apr 3 13:44:01.933: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 3 13:44:01.959: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 3 13:44:01.959: INFO: Container kube-proxy ready: true, restart count 0
Apr 3 13:44:01.959: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 3 13:44:01.959: INFO: Container kindnet-cni ready: true, restart count 0
Apr 3 13:44:01.959: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 3 13:44:01.959: INFO: Container coredns ready: true, restart count 0
Apr 3 13:44:01.959: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 3 13:44:01.959: INFO: Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-worker
STEP: verifying the node has the label node iruya-worker2
Apr 3 13:44:02.017: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2
Apr 3 13:44:02.017: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2
Apr 3 13:44:02.017: INFO: Pod kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker
Apr 3 13:44:02.017: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2
Apr 3 13:44:02.017: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker
Apr 3 13:44:02.017: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-30bc31fc-229c-4d6f-ac21-eabffd02be84.1602532621e7993e], Reason = [Scheduled], Message = [Successfully assigned sched-pred-356/filler-pod-30bc31fc-229c-4d6f-ac21-eabffd02be84 to iruya-worker]
STEP: Considering event: Type = [Normal], Name = [filler-pod-30bc31fc-229c-4d6f-ac21-eabffd02be84.160253266c3f2ba1], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-30bc31fc-229c-4d6f-ac21-eabffd02be84.16025326c3c1d009], Reason = [Created], Message = [Created container filler-pod-30bc31fc-229c-4d6f-ac21-eabffd02be84]
STEP: Considering event: Type = [Normal], Name = [filler-pod-30bc31fc-229c-4d6f-ac21-eabffd02be84.16025326d9504ba4], Reason = [Started], Message = [Started container filler-pod-30bc31fc-229c-4d6f-ac21-eabffd02be84]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8f08ddfe-6c93-4383-af08-4bbb1db52432.160253262342b8d4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-356/filler-pod-8f08ddfe-6c93-4383-af08-4bbb1db52432 to iruya-worker2]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8f08ddfe-6c93-4383-af08-4bbb1db52432.16025326a411ed36], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8f08ddfe-6c93-4383-af08-4bbb1db52432.16025326dd305e52], Reason = [Created], Message = [Created container filler-pod-8f08ddfe-6c93-4383-af08-4bbb1db52432]
STEP: Considering event: Type = [Normal], Name = [filler-pod-8f08ddfe-6c93-4383-af08-4bbb1db52432.16025326eb833b4e], Reason = [Started], Message = [Started container filler-pod-8f08ddfe-6c93-4383-af08-4bbb1db52432]
STEP: Considering event: Type = [Warning], Name = [additional-pod.160253278a47767c], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-worker
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-worker2
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:44:09.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-356" for this suite.
Apr 3 13:44:15.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:44:15.208: INFO: namespace sched-pred-356 deletion completed in 6.070763287s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:13.323 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
validates resource limits of pods that are allowed to run [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:44:15.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Apr 3 13:44:15.258: INFO: namespace kubectl-9632
Apr 3 13:44:15.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9632'
Apr 3 13:44:15.504: INFO: stderr: ""
Apr 3 13:44:15.504: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Apr 3 13:44:16.509: INFO: Selector matched 1 pods for map[app:redis]
Apr 3 13:44:16.509: INFO: Found 0 / 1
Apr 3 13:44:17.509: INFO: Selector matched 1 pods for map[app:redis]
Apr 3 13:44:17.509: INFO: Found 0 / 1
Apr 3 13:44:18.508: INFO: Selector matched 1 pods for map[app:redis]
Apr 3 13:44:18.509: INFO: Found 1 / 1
Apr 3 13:44:18.509: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Apr 3 13:44:18.512: INFO: Selector matched 1 pods for map[app:redis]
Apr 3 13:44:18.512: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
Apr 3 13:44:18.512: INFO: wait on redis-master startup in kubectl-9632
Apr 3 13:44:18.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-h2h6k redis-master --namespace=kubectl-9632'
Apr 3 13:44:18.626: INFO: stderr: ""
Apr 3 13:44:18.627: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Apr 13:44:17.971 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Apr 13:44:17.971 # Server started, Redis version 3.2.12\n1:M 03 Apr 13:44:17.971 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Apr 13:44:17.971 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Apr 3 13:44:18.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9632'
Apr 3 13:44:18.768: INFO: stderr: ""
Apr 3 13:44:18.768: INFO: stdout: "service/rm2 exposed\n"
Apr 3 13:44:18.801: INFO: Service rm2 in namespace kubectl-9632 found.
STEP: exposing service
Apr 3 13:44:20.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9632'
Apr 3 13:44:20.953: INFO: stderr: ""
Apr 3 13:44:20.953: INFO: stdout: "service/rm3 exposed\n"
Apr 3 13:44:20.958: INFO: Service rm3 in namespace kubectl-9632 found.
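The two `kubectl expose` invocations in the log above first expose the `redis-master` RC as Service `rm2` on port 1234, then re-expose `rm2` as Service `rm3` on port 2345, both targeting container port 6379. As a sketch, the generated Services are roughly equivalent to the following manifests; the `app: redis` selector is an assumption inferred from the `map[app:redis]` selector shown in the log, since `kubectl expose` copies the source object's pod labels:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-9632
spec:
  selector:
    app: redis          # assumed: copied from the RC's pod labels
  ports:
  - port: 1234          # from --port
    targetPort: 6379    # from --target-port
---
apiVersion: v1
kind: Service
metadata:
  name: rm3
  namespace: kubectl-9632
spec:
  selector:
    app: redis          # rm3 is exposed from rm2, so it inherits rm2's selector
  ports:
  - port: 2345
    targetPort: 6379
```

Note that exposing a Service (rather than an RC) still produces a plain selector-based Service; the test verifies both objects are created and resolvable.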
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:44:22.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9632" for this suite.
Apr 3 13:44:44.980: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:44:45.055: INFO: namespace kubectl-9632 deletion completed in 22.086505909s
• [SLOW TEST:29.848 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:44:45.056: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Apr 3 13:44:45.106: INFO: Waiting up to 5m0s for pod "pod-687e3cfc-f19b-4727-bec5-ee907ab181d4" in namespace "emptydir-2763" to be "success or failure"
Apr 3 13:44:45.115: INFO: Pod "pod-687e3cfc-f19b-4727-bec5-ee907ab181d4": Phase="Pending", Reason="", readiness=false. Elapsed: 9.279756ms
Apr 3 13:44:47.119: INFO: Pod "pod-687e3cfc-f19b-4727-bec5-ee907ab181d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013059422s
Apr 3 13:44:49.123: INFO: Pod "pod-687e3cfc-f19b-4727-bec5-ee907ab181d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017149647s
STEP: Saw pod success
Apr 3 13:44:49.123: INFO: Pod "pod-687e3cfc-f19b-4727-bec5-ee907ab181d4" satisfied condition "success or failure"
Apr 3 13:44:49.126: INFO: Trying to get logs from node iruya-worker2 pod pod-687e3cfc-f19b-4727-bec5-ee907ab181d4 container test-container:
STEP: delete the pod
Apr 3 13:44:49.165: INFO: Waiting for pod pod-687e3cfc-f19b-4727-bec5-ee907ab181d4 to disappear
Apr 3 13:44:49.169: INFO: Pod pod-687e3cfc-f19b-4727-bec5-ee907ab181d4 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:44:49.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2763" for this suite.
Apr 3 13:44:55.197: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:44:55.267: INFO: namespace emptydir-2763 deletion completed in 6.094568442s
• [SLOW TEST:10.210 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:44:55.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:44:59.376: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1686" for this suite.
Apr 3 13:45:45.394: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:45:45.493: INFO: namespace kubelet-test-1686 deletion completed in 46.111606085s
• [SLOW TEST:50.225 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a read only busybox container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:45:45.493: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 3 13:45:45.563: INFO: Waiting up to 5m0s for pod "downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0" in namespace "projected-3326" to be "success or failure"
Apr 3 13:45:45.571: INFO: Pod "downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.066094ms
Apr 3 13:45:47.574: INFO: Pod "downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011245203s
Apr 3 13:45:49.578: INFO: Pod "downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015555048s
STEP: Saw pod success
Apr 3 13:45:49.578: INFO: Pod "downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0" satisfied condition "success or failure"
Apr 3 13:45:49.582: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0 container client-container:
STEP: delete the pod
Apr 3 13:45:49.617: INFO: Waiting for pod downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0 to disappear
Apr 3 13:45:49.637: INFO: Pod downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:45:49.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3326" for this suite.
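The "should provide podname only" test above mounts a projected downward API volume that exposes just `metadata.name`. An illustrative sketch of that shape; the image, command, and mount path are assumptions, while the pod and container names come from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-519b9e99-c374-423e-8cd8-601a446b92b0  # from the log
spec:
  restartPolicy: Never
  containers:
  - name: client-container      # container name reported in the log
    image: busybox              # assumed small test image
    command: ["cat", "/etc/podinfo/podname"]   # assumed path
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name   # the only field this test exposes
```

The framework then reads the container's log and asserts it contains the pod's own name.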
Apr 3 13:45:55.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:45:55.729: INFO: namespace projected-3326 deletion completed in 6.089156171s
• [SLOW TEST:10.236 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide podname only [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:45:55.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 3 13:45:55.795: INFO: Waiting up to 5m0s for pod "downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44" in namespace "downward-api-7284" to be "success or failure"
Apr 3 13:45:55.799: INFO: Pod "downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44": Phase="Pending", Reason="", readiness=false. Elapsed: 3.768872ms
Apr 3 13:45:57.803: INFO: Pod "downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008141712s
Apr 3 13:45:59.807: INFO: Pod "downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011616188s
STEP: Saw pod success
Apr 3 13:45:59.807: INFO: Pod "downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44" satisfied condition "success or failure"
Apr 3 13:45:59.809: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44 container client-container:
STEP: delete the pod
Apr 3 13:45:59.851: INFO: Waiting for pod downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44 to disappear
Apr 3 13:45:59.871: INFO: Pod downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:45:59.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7284" for this suite.
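The downward API memory-limit test above relies on a documented fallback: when a container declares no memory limit, a `resourceFieldRef` for `limits.memory` reports the node's allocatable memory instead. A sketch of that shape; the image, command, and paths are assumptions, while the pod and container names come from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-afc3a48a-3129-49b0-8e0f-652923092a44  # from the log
spec:
  restartPolicy: Never
  containers:
  - name: client-container      # container name reported in the log
    image: busybox              # assumed small test image
    command: ["cat", "/etc/podinfo/memory_limit"]   # assumed path
    # deliberately no resources.limits.memory: the reported value should
    # then fall back to the node's allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```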
Apr 3 13:46:05.886: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:46:05.984: INFO: namespace downward-api-7284 deletion completed in 6.110774994s
• [SLOW TEST:10.255 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:46:05.985: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Apr 3 13:46:06.034: INFO: Waiting up to 5m0s for pod "downward-api-1434f81c-e0e7-46fd-b1e4-e6af73cdbd49" in namespace "downward-api-91" to be "success or failure"
Apr 3 13:46:06.038: INFO: Pod "downward-api-1434f81c-e0e7-46fd-b1e4-e6af73cdbd49": Phase="Pending", Reason="", readiness=false. Elapsed: 3.605949ms
Apr 3 13:46:08.041: INFO: Pod "downward-api-1434f81c-e0e7-46fd-b1e4-e6af73cdbd49": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007035519s
Apr 3 13:46:10.045: INFO: Pod "downward-api-1434f81c-e0e7-46fd-b1e4-e6af73cdbd49": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010813194s
STEP: Saw pod success
Apr 3 13:46:10.045: INFO: Pod "downward-api-1434f81c-e0e7-46fd-b1e4-e6af73cdbd49" satisfied condition "success or failure"
Apr 3 13:46:10.047: INFO: Trying to get logs from node iruya-worker2 pod downward-api-1434f81c-e0e7-46fd-b1e4-e6af73cdbd49 container dapi-container:
STEP: delete the pod
Apr 3 13:46:10.063: INFO: Waiting for pod downward-api-1434f81c-e0e7-46fd-b1e4-e6af73cdbd49 to disappear
Apr 3 13:46:10.078: INFO: Pod downward-api-1434f81c-e0e7-46fd-b1e4-e6af73cdbd49 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:46:10.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-91" for this suite.
Apr 3 13:46:16.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:46:16.184: INFO: namespace downward-api-91 deletion completed in 6.10305748s
• [SLOW TEST:10.199 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:46:16.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 3 13:46:16.285: INFO: Waiting up to 5m0s for pod "pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740" in namespace "emptydir-6992" to be "success or failure"
Apr 3 13:46:16.290: INFO: Pod "pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088104ms
Apr 3 13:46:18.437: INFO: Pod "pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740": Phase="Pending", Reason="", readiness=false. Elapsed: 2.151503147s
Apr 3 13:46:20.528: INFO: Pod "pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740": Phase="Pending", Reason="", readiness=false. Elapsed: 4.242100928s
Apr 3 13:46:22.532: INFO: Pod "pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740": Phase="Pending", Reason="", readiness=false. Elapsed: 6.246513046s
Apr 3 13:46:24.536: INFO: Pod "pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.250573245s
STEP: Saw pod success
Apr 3 13:46:24.536: INFO: Pod "pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740" satisfied condition "success or failure"
Apr 3 13:46:24.539: INFO: Trying to get logs from node iruya-worker pod pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740 container test-container:
STEP: delete the pod
Apr 3 13:46:24.563: INFO: Waiting for pod pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740 to disappear
Apr 3 13:46:25.251: INFO: Pod pod-9d4a0b8e-5494-4a62-9f09-7e3ccaf09740 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:46:25.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6992" for this suite.
Apr 3 13:46:31.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:46:31.447: INFO: namespace emptydir-6992 deletion completed in 6.193187052s
• [SLOW TEST:15.262 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:46:31.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7600, will wait for the garbage collector to delete the pods
Apr 3 13:46:35.559: INFO: Deleting Job.batch foo took: 5.693475ms
Apr 3 13:46:35.859: INFO: Terminating Job.batch foo pods took: 300.2964ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:47:12.262: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7600" for this suite.
Apr 3 13:47:18.276: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:47:18.353: INFO: namespace job-7600 deletion completed in 6.086902473s
• [SLOW TEST:46.905 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should delete a job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:47:18.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-60c7fae7-7331-4e15-af17-7053cc2cb73c in namespace container-probe-5402
Apr 3 13:47:22.451: INFO: Started pod busybox-60c7fae7-7331-4e15-af17-7053cc2cb73c in namespace container-probe-5402
STEP: checking the pod's current state and verifying that restartCount is present
Apr 3 13:47:22.454: INFO: Initial restart count of pod busybox-60c7fae7-7331-4e15-af17-7053cc2cb73c is 0
Apr 3 13:48:16.567: INFO: Restart count of pod container-probe-5402/busybox-60c7fae7-7331-4e15-af17-7053cc2cb73c is now 1 (54.112584511s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:48:16.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5402" for this suite.
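The probe test above creates the classic exec-liveness pod: the container touches `/tmp/health`, removes it after a while, and the `cat /tmp/health` probe then fails, so the kubelet restarts the container — the log shows the restart count going from 0 to 1 after about 54 seconds. A sketch of that shape; the image, args, and probe timings are assumptions, only the pod name comes from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-60c7fae7-7331-4e15-af17-7053cc2cb73c   # pod name from the log
spec:
  containers:
  - name: busybox
    image: busybox               # assumed
    # create the health file, then remove it so the probe starts failing
    args: ["/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -rf /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # probe named in the test title
      initialDelaySeconds: 15    # assumed timing values
      periodSeconds: 5
      failureThreshold: 1
```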
Apr 3 13:48:22.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:48:22.700: INFO: namespace container-probe-5402 deletion completed in 6.106277017s
• [SLOW TEST:64.346 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:48:22.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Apr 3 13:48:22.755: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:48:29.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1932" for this suite.
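The InitContainer test above verifies that init containers run to completion, in order, before the app container starts on a `restartPolicy: Never` pod. An illustrative sketch of a pod with that shape; all names and images here are hypothetical, not taken from the log:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-init-example        # hypothetical name
spec:
  restartPolicy: Never
  initContainers:               # each must exit 0, in order, before containers start
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: busybox
    command: ["true"]
```

The test then checks the pod's status: both init containers terminated successfully before the app container was invoked.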
Apr 3 13:48:35.439: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:48:35.517: INFO: namespace init-container-1932 deletion completed in 6.091836246s
• [SLOW TEST:12.817 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should invoke init containers on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:48:35.518: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 3 13:48:35.575: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3a2db6d-735a-4f25-b933-4fc4547554c6" in namespace "projected-5051" to be "success or failure"
Apr 3 13:48:35.626: INFO: Pod "downwardapi-volume-f3a2db6d-735a-4f25-b933-4fc4547554c6": Phase="Pending", Reason="", readiness=false. Elapsed: 50.306607ms
Apr 3 13:48:37.629: INFO: Pod "downwardapi-volume-f3a2db6d-735a-4f25-b933-4fc4547554c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053804434s
Apr 3 13:48:39.633: INFO: Pod "downwardapi-volume-f3a2db6d-735a-4f25-b933-4fc4547554c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057765789s
STEP: Saw pod success
Apr 3 13:48:39.633: INFO: Pod "downwardapi-volume-f3a2db6d-735a-4f25-b933-4fc4547554c6" satisfied condition "success or failure"
Apr 3 13:48:39.636: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-f3a2db6d-735a-4f25-b933-4fc4547554c6 container client-container:
STEP: delete the pod
Apr 3 13:48:39.668: INFO: Waiting for pod downwardapi-volume-f3a2db6d-735a-4f25-b933-4fc4547554c6 to disappear
Apr 3 13:48:39.687: INFO: Pod downwardapi-volume-f3a2db6d-735a-4f25-b933-4fc4547554c6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:48:39.687: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5051" for this suite.
Apr 3 13:48:45.727: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:48:45.822: INFO: namespace projected-5051 deletion completed in 6.111758801s • [SLOW TEST:10.304 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:48:45.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:48:45.935: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7210" for this suite. 
Apr 3 13:48:51.973: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 13:48:52.060: INFO: namespace kubelet-test-7210 deletion completed in 6.10286876s • [SLOW TEST:6.238 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 13:48:52.063: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Apr 3 13:48:52.154: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3654,SelfLink:/api/v1/namespaces/watch-3654/configmaps/e2e-watch-test-resource-version,UID:30daaff1-3ad5-490b-9f34-abd54a35d831,ResourceVersion:3401136,Generation:0,CreationTimestamp:2020-04-03 13:48:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 3 13:48:52.154: INFO: Got : DELETED
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-3654,SelfLink:/api/v1/namespaces/watch-3654/configmaps/e2e-watch-test-resource-version,UID:30daaff1-3ad5-490b-9f34-abd54a35d831,ResourceVersion:3401137,Generation:0,CreationTimestamp:2020-04-03 13:48:52 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:48:52.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3654" for this suite.
Apr 3 13:48:58.171: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:48:58.249: INFO: namespace watch-3654 deletion completed in 6.091872092s
• [SLOW TEST:6.187 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:48:58.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Apr 3 13:49:02.872: INFO: Successfully updated pod "annotationupdate77b6fb5b-e16a-4418-aaf9-fc2d693e0ac8"
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:49:04.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3645" for this suite.
Apr 3 13:49:26.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:49:27.009: INFO: namespace downward-api-3645 deletion completed in 22.114172111s
• [SLOW TEST:28.759 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:49:27.009: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:49:27.099: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8843" for this suite.
Apr 3 13:49:47.158: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:49:47.238: INFO: namespace pods-8843 deletion completed in 20.123793018s
• [SLOW TEST:20.229 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:49:47.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Apr 3 13:49:47.311: INFO: Waiting up to 5m0s for pod "pod-c1b67d48-dbc8-47df-b404-da024b5b23ed" in namespace "emptydir-797" to be "success or failure"
Apr 3 13:49:47.318: INFO: Pod "pod-c1b67d48-dbc8-47df-b404-da024b5b23ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.957553ms
Apr 3 13:49:49.330: INFO: Pod "pod-c1b67d48-dbc8-47df-b404-da024b5b23ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018746029s
Apr 3 13:49:51.334: INFO: Pod "pod-c1b67d48-dbc8-47df-b404-da024b5b23ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02302976s
STEP: Saw pod success
Apr 3 13:49:51.334: INFO: Pod "pod-c1b67d48-dbc8-47df-b404-da024b5b23ed" satisfied condition "success or failure"
Apr 3 13:49:51.338: INFO: Trying to get logs from node iruya-worker2 pod pod-c1b67d48-dbc8-47df-b404-da024b5b23ed container test-container:
STEP: delete the pod
Apr 3 13:49:51.356: INFO: Waiting for pod pod-c1b67d48-dbc8-47df-b404-da024b5b23ed to disappear
Apr 3 13:49:51.377: INFO: Pod pod-c1b67d48-dbc8-47df-b404-da024b5b23ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:49:51.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-797" for this suite.
Apr 3 13:49:57.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:49:57.478: INFO: namespace emptydir-797 deletion completed in 6.098100953s
• [SLOW TEST:10.239 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:49:57.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-ea745dd3-c065-4da2-88b0-70ffaefcc9da
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-ea745dd3-c065-4da2-88b0-70ffaefcc9da
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:50:03.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5910" for this suite.
Apr 3 13:50:25.614: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:50:25.694: INFO: namespace projected-5910 deletion completed in 22.095277962s
• [SLOW TEST:28.216 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:50:25.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be
provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Apr 3 13:50:25.806: INFO: PodSpec: initContainers in spec.initContainers Apr 3 13:51:16.766: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-d6b34ed7-3401-4b49-b259-c7dbb75e42b2", GenerateName:"", Namespace:"init-container-4306", SelfLink:"/api/v1/namespaces/init-container-4306/pods/pod-init-d6b34ed7-3401-4b49-b259-c7dbb75e42b2", UID:"2b3bb6c7-ee04-4160-98ff-c42aec5d23c8", ResourceVersion:"3401549", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63721518625, loc:(*time.Location)(0x7ea78c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"806519738"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vl8df", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002803680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), 
Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vl8df", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vl8df", 
ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vl8df", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", 
TerminationGracePeriodSeconds:(*int64)(0xc0020f60d8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0027d9b60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020f6160)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0020f6180)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0020f6188), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0020f618c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721518625, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721518625, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721518625, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721518625, loc:(*time.Location)(0x7ea78c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.6", PodIP:"10.244.2.137", StartTime:(*v1.Time)(0xc002e0c640), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00220c9a0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc00220ca10)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://60c50eddcb0f508bc21dc385fc06a8601cde8059324516cb4f5e395f322d8752"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e0c8c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc002e0c660), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:51:16.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4306" for this suite.
Apr 3 13:51:38.831: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:51:38.910: INFO: namespace init-container-4306 deletion completed in 22.139111662s
• [SLOW TEST:73.216 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers if init containers fail on a RestartAlways pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:51:38.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating replication controller my-hostname-basic-cfe8c695-568f-4541-90ce-0c21d42af9e7
Apr 3 13:51:38.991: INFO: Pod name my-hostname-basic-cfe8c695-568f-4541-90ce-0c21d42af9e7: Found 0 pods out of 1
Apr 3 13:51:43.995: INFO: Pod name my-hostname-basic-cfe8c695-568f-4541-90ce-0c21d42af9e7: Found 1 pods out of 1
Apr 3 13:51:43.996: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-cfe8c695-568f-4541-90ce-0c21d42af9e7" are running
Apr 3 13:51:43.999: INFO: Pod "my-hostname-basic-cfe8c695-568f-4541-90ce-0c21d42af9e7-n2xxd" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 13:51:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 13:51:41 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 13:51:41 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-04-03 13:51:38 +0000 UTC Reason: Message:}])
Apr 3 13:51:43.999: INFO: Trying to dial the pod
Apr 3 13:51:49.014: INFO: Controller my-hostname-basic-cfe8c695-568f-4541-90ce-0c21d42af9e7: Got expected result from replica 1 [my-hostname-basic-cfe8c695-568f-4541-90ce-0c21d42af9e7-n2xxd]: "my-hostname-basic-cfe8c695-568f-4541-90ce-0c21d42af9e7-n2xxd", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:51:49.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4883" for this suite.
Apr 3 13:51:55.056: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:51:55.140: INFO: namespace replication-controller-4883 deletion completed in 6.123182119s
• [SLOW TEST:16.230 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should serve a basic image on each replica with a public image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:51:55.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-ea457260-ce80-4363-a6f0-12e7cd9f1e57
STEP: Creating a pod to test consume configMaps
Apr 3 13:51:55.237: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5680f79a-ad5d-440d-b374-fc3dcac9fe39" in namespace "projected-3248" to be "success or failure"
Apr 3 13:51:55.241: INFO: Pod "pod-projected-configmaps-5680f79a-ad5d-440d-b374-fc3dcac9fe39": Phase="Pending", Reason="", readiness=false.
Elapsed: 3.941512ms Apr 3 13:51:57.246: INFO: Pod "pod-projected-configmaps-5680f79a-ad5d-440d-b374-fc3dcac9fe39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008353059s Apr 3 13:51:59.250: INFO: Pod "pod-projected-configmaps-5680f79a-ad5d-440d-b374-fc3dcac9fe39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012546745s STEP: Saw pod success Apr 3 13:51:59.250: INFO: Pod "pod-projected-configmaps-5680f79a-ad5d-440d-b374-fc3dcac9fe39" satisfied condition "success or failure" Apr 3 13:51:59.252: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-5680f79a-ad5d-440d-b374-fc3dcac9fe39 container projected-configmap-volume-test: STEP: delete the pod Apr 3 13:51:59.290: INFO: Waiting for pod pod-projected-configmaps-5680f79a-ad5d-440d-b374-fc3dcac9fe39 to disappear Apr 3 13:51:59.307: INFO: Pod pod-projected-configmaps-5680f79a-ad5d-440d-b374-fc3dcac9fe39 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 13:51:59.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3248" for this suite. 
Apr 3 13:52:05.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:52:05.405: INFO: namespace projected-3248 deletion completed in 6.09313249s
• [SLOW TEST:10.264 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:52:05.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-2cef9cde-a305-4657-a695-ee27b6cc3651
STEP: Creating a pod to test consume configMaps
Apr 3 13:52:05.518: INFO: Waiting up to 5m0s for pod "pod-configmaps-7b808581-53f6-4c71-a88d-bf389ad77b97" in namespace "configmap-2709" to be "success or failure"
Apr 3 13:52:05.537: INFO: Pod "pod-configmaps-7b808581-53f6-4c71-a88d-bf389ad77b97": Phase="Pending", Reason="", readiness=false. Elapsed: 18.719804ms
Apr 3 13:52:07.558: INFO: Pod "pod-configmaps-7b808581-53f6-4c71-a88d-bf389ad77b97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039033198s
Apr 3 13:52:09.561: INFO: Pod "pod-configmaps-7b808581-53f6-4c71-a88d-bf389ad77b97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042965833s
STEP: Saw pod success
Apr 3 13:52:09.561: INFO: Pod "pod-configmaps-7b808581-53f6-4c71-a88d-bf389ad77b97" satisfied condition "success or failure"
Apr 3 13:52:09.564: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-7b808581-53f6-4c71-a88d-bf389ad77b97 container configmap-volume-test:
STEP: delete the pod
Apr 3 13:52:09.623: INFO: Waiting for pod pod-configmaps-7b808581-53f6-4c71-a88d-bf389ad77b97 to disappear
Apr 3 13:52:09.626: INFO: Pod pod-configmaps-7b808581-53f6-4c71-a88d-bf389ad77b97 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:52:09.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2709" for this suite.
Apr 3 13:52:15.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:52:15.725: INFO: namespace configmap-2709 deletion completed in 6.095894498s
• [SLOW TEST:10.320 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:52:15.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7796
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Apr 3 13:52:15.846: INFO: Found 0 stateful pods, waiting for 3
Apr 3 13:52:25.852: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 3 13:52:25.852: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 3 13:52:25.852: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Apr 3 13:52:35.863: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Apr 3 13:52:35.863: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Apr 3 13:52:35.863: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Apr 3 13:52:35.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7796 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 3 13:52:38.529: INFO: stderr: "I0403 13:52:38.411161    1808 log.go:172] (0xc00012ae70) (0xc0005f0780) Create stream\nI0403 13:52:38.411198    1808 log.go:172] (0xc00012ae70) (0xc0005f0780) Stream added, broadcasting: 1\nI0403 13:52:38.413455    1808 log.go:172] (0xc00012ae70) Reply frame received for 1\nI0403 13:52:38.413496    1808 log.go:172] (0xc00012ae70) (0xc000532000) Create stream\nI0403 13:52:38.413504    1808 log.go:172] (0xc00012ae70) (0xc000532000) Stream added, broadcasting: 3\nI0403 13:52:38.414219    1808 log.go:172] (0xc00012ae70) Reply frame received for 3\nI0403 13:52:38.414259    1808 log.go:172] (0xc00012ae70) (0xc0005e8000) Create stream\nI0403 13:52:38.414269    1808 log.go:172] (0xc00012ae70) (0xc0005e8000) Stream added, broadcasting: 5\nI0403 13:52:38.415008    1808 log.go:172] (0xc00012ae70) Reply frame received for 5\nI0403 13:52:38.472559    1808 log.go:172] (0xc00012ae70) Data frame received for 5\nI0403 13:52:38.472601    1808 log.go:172] (0xc0005e8000) (5) Data frame handling\nI0403 13:52:38.472619    1808 log.go:172] (0xc0005e8000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 13:52:38.521702    1808 log.go:172] (0xc00012ae70) Data frame received for 3\nI0403 13:52:38.521725    1808 log.go:172] (0xc000532000) (3) Data frame handling\nI0403 13:52:38.521735    1808 log.go:172] (0xc000532000) (3) Data frame sent\nI0403 13:52:38.521994    1808 log.go:172] (0xc00012ae70) Data frame received for 5\nI0403 13:52:38.522025    1808 log.go:172] (0xc0005e8000) (5) Data frame handling\nI0403 13:52:38.522066    1808 log.go:172] (0xc00012ae70) Data frame received for 3\nI0403 13:52:38.522086    1808 log.go:172] (0xc000532000) (3) Data frame handling\nI0403 13:52:38.524045    1808 log.go:172] (0xc00012ae70) Data frame received for 1\nI0403 13:52:38.524063    1808 log.go:172] (0xc0005f0780) (1) Data frame handling\nI0403 13:52:38.524082    1808 log.go:172] (0xc0005f0780) (1) Data frame sent\nI0403 13:52:38.524098    1808 log.go:172] (0xc00012ae70) (0xc0005f0780) Stream removed, broadcasting: 1\nI0403 13:52:38.524274    1808 log.go:172] (0xc00012ae70) Go away received\nI0403 13:52:38.524424    1808 log.go:172] (0xc00012ae70) (0xc0005f0780) Stream removed, broadcasting: 1\nI0403 13:52:38.524436    1808 log.go:172] (0xc00012ae70) (0xc000532000) Stream removed, broadcasting: 3\nI0403 13:52:38.524442    1808 log.go:172] (0xc00012ae70) (0xc0005e8000) Stream removed, broadcasting: 5\n"
Apr 3 13:52:38.530: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 3 13:52:38.530: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Apr 3 13:52:48.562: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Apr 3 13:52:58.601: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7796 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 3 13:52:58.825: INFO: stderr: "I0403 13:52:58.729949    1838 log.go:172] (0xc000a44370) (0xc0009d26e0) Create stream\nI0403 13:52:58.730015    1838 log.go:172] (0xc000a44370) (0xc0009d26e0) Stream added, broadcasting: 1\nI0403 13:52:58.733047    1838 log.go:172] (0xc000a44370) Reply frame received for 1\nI0403 13:52:58.733091    1838 log.go:172] (0xc000a44370) (0xc0009d2780) Create stream\nI0403 13:52:58.733246    1838 log.go:172] (0xc000a44370) (0xc0009d2780) Stream added, broadcasting: 3\nI0403 13:52:58.734058    1838 log.go:172] (0xc000a44370) Reply frame received for 3\nI0403 13:52:58.734089    1838 log.go:172] (0xc000a44370) (0xc0008cc000) Create stream\nI0403 13:52:58.734099    1838 log.go:172] (0xc000a44370) (0xc0008cc000) Stream added, broadcasting: 5\nI0403 13:52:58.735046    1838 log.go:172] (0xc000a44370) Reply frame received for 5\nI0403 13:52:58.817812    1838 log.go:172] (0xc000a44370) Data frame received for 5\nI0403 13:52:58.817839    1838 log.go:172] (0xc0008cc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0403 13:52:58.817868    1838 log.go:172] (0xc000a44370) Data frame received for 3\nI0403 13:52:58.817901    1838 log.go:172] (0xc0009d2780) (3) Data frame handling\nI0403 13:52:58.817921    1838 log.go:172] (0xc0009d2780) (3) Data frame sent\nI0403 13:52:58.817942    1838 log.go:172] (0xc000a44370) Data frame received for 3\nI0403 13:52:58.817961    1838 log.go:172] (0xc0009d2780) (3) Data frame handling\nI0403 13:52:58.818007    1838 log.go:172] (0xc0008cc000) (5) Data frame sent\nI0403 13:52:58.818024    1838 log.go:172] (0xc000a44370) Data frame received for 5\nI0403 13:52:58.818034    1838 log.go:172] (0xc0008cc000) (5) Data frame handling\nI0403 13:52:58.819628    1838 log.go:172] (0xc000a44370) Data frame received for 1\nI0403 13:52:58.819662    1838 log.go:172] (0xc0009d26e0) (1) Data frame handling\nI0403 13:52:58.819679    1838 log.go:172] (0xc0009d26e0) (1) Data frame sent\nI0403 13:52:58.819693    1838 log.go:172] (0xc000a44370) (0xc0009d26e0) Stream removed, broadcasting: 1\nI0403 13:52:58.819711    1838 log.go:172] (0xc000a44370) Go away received\nI0403 13:52:58.820173    1838 log.go:172] (0xc000a44370) (0xc0009d26e0) Stream removed, broadcasting: 1\nI0403 13:52:58.820196    1838 log.go:172] (0xc000a44370) (0xc0009d2780) Stream removed, broadcasting: 3\nI0403 13:52:58.820212    1838 log.go:172] (0xc000a44370) (0xc0008cc000) Stream removed, broadcasting: 5\n"
Apr 3 13:52:58.825: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 3 13:52:58.825: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 3 13:53:08.847: INFO: Waiting for StatefulSet statefulset-7796/ss2 to complete update
Apr 3 13:53:08.847: INFO: Waiting for Pod statefulset-7796/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 3 13:53:08.847: INFO: Waiting for Pod statefulset-7796/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 3 13:53:08.847: INFO: Waiting for Pod statefulset-7796/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 3 13:53:18.855: INFO: Waiting for StatefulSet statefulset-7796/ss2 to complete update
Apr 3 13:53:18.856: INFO: Waiting for Pod statefulset-7796/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 3 13:53:18.856: INFO: Waiting for Pod statefulset-7796/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Apr 3 13:53:28.854: INFO: Waiting for StatefulSet statefulset-7796/ss2 to complete update
Apr 3 13:53:28.854: INFO: Waiting for Pod statefulset-7796/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Rolling back to a previous revision
Apr 3 13:53:38.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7796 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Apr 3 13:53:39.151: INFO: stderr: "I0403 13:53:39.022555    1860 log.go:172] (0xc00013afd0) (0xc0005aeaa0) Create stream\nI0403 13:53:39.022623    1860 log.go:172] (0xc00013afd0) (0xc0005aeaa0) Stream added, broadcasting: 1\nI0403 13:53:39.025583    1860 log.go:172] (0xc00013afd0) Reply frame received for 1\nI0403 13:53:39.025632    1860 log.go:172] (0xc00013afd0) (0xc000a5e000) Create stream\nI0403 13:53:39.025653    1860 log.go:172] (0xc00013afd0) (0xc000a5e000) Stream added, broadcasting: 3\nI0403 13:53:39.026714    1860 log.go:172] (0xc00013afd0) Reply frame received for 3\nI0403 13:53:39.026752    1860 log.go:172] (0xc00013afd0) (0xc00074a000) Create stream\nI0403 13:53:39.026767    1860 log.go:172] (0xc00013afd0) (0xc00074a000) Stream added, broadcasting: 5\nI0403 13:53:39.027623    1860 log.go:172] (0xc00013afd0) Reply frame received for 5\nI0403 13:53:39.112262    1860 log.go:172] (0xc00013afd0) Data frame received for 5\nI0403 13:53:39.112300    1860 log.go:172] (0xc00074a000) (5) Data frame handling\nI0403 13:53:39.112323    1860 log.go:172] (0xc00074a000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 13:53:39.138045    1860 log.go:172] (0xc00013afd0) Data frame received for 3\nI0403 13:53:39.138091    1860 log.go:172] (0xc000a5e000) (3) Data frame handling\nI0403 13:53:39.138129    1860 log.go:172] (0xc000a5e000) (3) Data frame sent\nI0403 13:53:39.138339    1860 log.go:172] (0xc00013afd0) Data frame received for 3\nI0403 13:53:39.138385    1860 log.go:172] (0xc000a5e000) (3) Data frame handling\nI0403 13:53:39.138468    1860 log.go:172] (0xc00013afd0) Data frame received for 5\nI0403 13:53:39.138508    1860 log.go:172] (0xc00074a000) (5) Data frame handling\nI0403 13:53:39.142116    1860 log.go:172] (0xc00013afd0) Data frame received for 1\nI0403 13:53:39.142154    1860 log.go:172] (0xc0005aeaa0) (1) Data frame handling\nI0403 13:53:39.142188    1860 log.go:172] (0xc0005aeaa0) (1) Data frame sent\nI0403 13:53:39.142210    1860 log.go:172] (0xc00013afd0) (0xc0005aeaa0) Stream removed, broadcasting: 1\nI0403 13:53:39.145774    1860 log.go:172] (0xc00013afd0) Go away received\nI0403 13:53:39.146116    1860 log.go:172] (0xc00013afd0) (0xc0005aeaa0) Stream removed, broadcasting: 1\nI0403 13:53:39.146209    1860 log.go:172] (0xc00013afd0) (0xc000a5e000) Stream removed, broadcasting: 3\nI0403 13:53:39.146288    1860 log.go:172] (0xc00013afd0) (0xc00074a000) Stream removed, broadcasting: 5\n"
Apr 3 13:53:39.151: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Apr 3 13:53:39.151: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
Apr 3 13:53:49.191: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Apr 3 13:53:59.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7796 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Apr 3 13:53:59.475: INFO: stderr: "I0403 13:53:59.385637    1881 log.go:172] (0xc00073e580) (0xc0009b48c0) Create stream\nI0403 13:53:59.385686    1881 log.go:172] (0xc00073e580) (0xc0009b48c0) Stream added, broadcasting: 1\nI0403 13:53:59.388992    1881 log.go:172] (0xc00073e580) Reply frame received for 1\nI0403 13:53:59.389023    1881 log.go:172] (0xc00073e580) (0xc0009b4000) Create stream\nI0403 13:53:59.389035    1881 log.go:172] (0xc00073e580) (0xc0009b4000) Stream added, broadcasting: 3\nI0403 13:53:59.389931    1881 log.go:172] (0xc00073e580) Reply frame received for 3\nI0403 13:53:59.389963    1881 log.go:172] (0xc00073e580) (0xc000868000) Create stream\nI0403 13:53:59.389974    1881 log.go:172] (0xc00073e580) (0xc000868000) Stream added, broadcasting: 5\nI0403 13:53:59.390710    1881 log.go:172] (0xc00073e580) Reply frame received for 5\nI0403 13:53:59.468743    1881 log.go:172] (0xc00073e580) Data frame received for 5\nI0403 13:53:59.468788    1881 log.go:172] (0xc000868000) (5) Data frame handling\nI0403 13:53:59.468808    1881 log.go:172] (0xc000868000) (5) Data frame sent\nI0403 13:53:59.468822    1881 log.go:172] (0xc00073e580) Data frame received for 5\nI0403 13:53:59.468836    1881 log.go:172] (0xc000868000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0403 13:53:59.468896    1881 log.go:172] (0xc00073e580) Data frame received for 3\nI0403 13:53:59.468960    1881 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0403 13:53:59.468983    1881 log.go:172] (0xc0009b4000) (3) Data frame sent\nI0403 13:53:59.469000    1881 log.go:172] (0xc00073e580) Data frame received for 3\nI0403 13:53:59.469015    1881 log.go:172] (0xc0009b4000) (3) Data frame handling\nI0403 13:53:59.470041    1881 log.go:172] (0xc00073e580) Data frame received for 1\nI0403 13:53:59.470067    1881 log.go:172] (0xc0009b48c0) (1) Data frame handling\nI0403 13:53:59.470081    1881 log.go:172] (0xc0009b48c0) (1) Data frame sent\nI0403 13:53:59.470096    1881 log.go:172] (0xc00073e580) (0xc0009b48c0) Stream removed, broadcasting: 1\nI0403 13:53:59.470338    1881 log.go:172] (0xc00073e580) Go away received\nI0403 13:53:59.470361    1881 log.go:172] (0xc00073e580) (0xc0009b48c0) Stream removed, broadcasting: 1\nI0403 13:53:59.470378    1881 log.go:172] (0xc00073e580) (0xc0009b4000) Stream removed, broadcasting: 3\nI0403 13:53:59.470386    1881 log.go:172] (0xc00073e580) (0xc000868000) Stream removed, broadcasting: 5\n"
Apr 3 13:53:59.475: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Apr 3 13:53:59.475: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
Apr 3 13:54:09.520: INFO: Waiting for StatefulSet statefulset-7796/ss2 to complete update
Apr 3 13:54:09.520: INFO: Waiting for Pod statefulset-7796/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 3 13:54:09.520: INFO: Waiting for Pod statefulset-7796/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 3 13:54:09.520: INFO: Waiting for Pod statefulset-7796/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 3 13:54:19.528: INFO: Waiting for StatefulSet statefulset-7796/ss2 to complete update
Apr 3 13:54:19.528: INFO: Waiting for Pod statefulset-7796/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Apr 3 13:54:29.540: INFO: Waiting for StatefulSet statefulset-7796/ss2 to complete update
Apr 3 13:54:29.541: INFO: Waiting for Pod statefulset-7796/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Apr 3 13:54:39.530: INFO: Deleting all statefulset in ns statefulset-7796
Apr 3 13:54:39.533: INFO: Scaling statefulset ss2 to 0
Apr 3 13:55:09.603: INFO: Waiting for statefulset status.replicas updated to 0
Apr 3 13:55:09.607: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:55:09.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7796" for this suite.
Apr 3 13:55:15.634: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:55:15.716: INFO: namespace statefulset-7796 deletion completed in 6.091704515s
• [SLOW TEST:179.990 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:55:15.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-3c7440fc-a267-46af-9082-8461d6ed3304
STEP: Creating a pod to test consume configMaps
Apr 3 13:55:15.812: INFO: Waiting up to 5m0s for pod "pod-configmaps-100c360d-9bab-4587-ae59-007aedf83bbe" in namespace "configmap-4972" to be "success or failure"
Apr 3 13:55:15.834: INFO: Pod "pod-configmaps-100c360d-9bab-4587-ae59-007aedf83bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 21.712653ms
Apr 3 13:55:17.838: INFO: Pod "pod-configmaps-100c360d-9bab-4587-ae59-007aedf83bbe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026116886s
Apr 3 13:55:19.842: INFO: Pod "pod-configmaps-100c360d-9bab-4587-ae59-007aedf83bbe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029956832s
STEP: Saw pod success
Apr 3 13:55:19.842: INFO: Pod "pod-configmaps-100c360d-9bab-4587-ae59-007aedf83bbe" satisfied condition "success or failure"
Apr 3 13:55:19.845: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-100c360d-9bab-4587-ae59-007aedf83bbe container configmap-volume-test:
STEP: delete the pod
Apr 3 13:55:19.878: INFO: Waiting for pod pod-configmaps-100c360d-9bab-4587-ae59-007aedf83bbe to disappear
Apr 3 13:55:19.891: INFO: Pod pod-configmaps-100c360d-9bab-4587-ae59-007aedf83bbe no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:55:19.891: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4972" for this suite.
Apr 3 13:55:25.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:55:25.999: INFO: namespace configmap-4972 deletion completed in 6.104157716s
• [SLOW TEST:10.282 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:55:26.000: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-a5866163-e361-4ac1-abf9-f850244bf47d
STEP: Creating a pod to test consume configMaps
Apr 3 13:55:26.068: INFO: Waiting up to 5m0s for pod "pod-configmaps-b1182113-0bf1-4a8f-80bc-49b803d26b33" in namespace "configmap-1251" to be "success or failure"
Apr 3 13:55:26.077: INFO: Pod "pod-configmaps-b1182113-0bf1-4a8f-80bc-49b803d26b33": Phase="Pending", Reason="", readiness=false. Elapsed: 9.755109ms
Apr 3 13:55:28.082: INFO: Pod "pod-configmaps-b1182113-0bf1-4a8f-80bc-49b803d26b33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014453197s
Apr 3 13:55:30.087: INFO: Pod "pod-configmaps-b1182113-0bf1-4a8f-80bc-49b803d26b33": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018871814s
STEP: Saw pod success
Apr 3 13:55:30.087: INFO: Pod "pod-configmaps-b1182113-0bf1-4a8f-80bc-49b803d26b33" satisfied condition "success or failure"
Apr 3 13:55:30.090: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-b1182113-0bf1-4a8f-80bc-49b803d26b33 container configmap-volume-test:
STEP: delete the pod
Apr 3 13:55:30.128: INFO: Waiting for pod pod-configmaps-b1182113-0bf1-4a8f-80bc-49b803d26b33 to disappear
Apr 3 13:55:30.136: INFO: Pod pod-configmaps-b1182113-0bf1-4a8f-80bc-49b803d26b33 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:55:30.137: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1251" for this suite.
Apr 3 13:55:36.174: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:55:36.277: INFO: namespace configmap-1251 deletion completed in 6.137115936s
• [SLOW TEST:10.278 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:55:36.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Apr 3 13:55:36.371: INFO: Waiting up to 5m0s for pod "var-expansion-1bc76ed4-0304-41e4-83d7-fd20e1e43222" in namespace "var-expansion-9376" to be "success or failure"
Apr 3 13:55:36.379: INFO: Pod "var-expansion-1bc76ed4-0304-41e4-83d7-fd20e1e43222": Phase="Pending", Reason="", readiness=false. Elapsed: 8.622667ms
Apr 3 13:55:38.383: INFO: Pod "var-expansion-1bc76ed4-0304-41e4-83d7-fd20e1e43222": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012630472s
Apr 3 13:55:40.387: INFO: Pod "var-expansion-1bc76ed4-0304-41e4-83d7-fd20e1e43222": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016598619s
STEP: Saw pod success
Apr 3 13:55:40.387: INFO: Pod "var-expansion-1bc76ed4-0304-41e4-83d7-fd20e1e43222" satisfied condition "success or failure"
Apr 3 13:55:40.391: INFO: Trying to get logs from node iruya-worker pod var-expansion-1bc76ed4-0304-41e4-83d7-fd20e1e43222 container dapi-container:
STEP: delete the pod
Apr 3 13:55:40.418: INFO: Waiting for pod var-expansion-1bc76ed4-0304-41e4-83d7-fd20e1e43222 to disappear
Apr 3 13:55:40.421: INFO: Pod var-expansion-1bc76ed4-0304-41e4-83d7-fd20e1e43222 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:55:40.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-9376" for this suite.
Apr 3 13:55:46.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:55:46.504: INFO: namespace var-expansion-9376 deletion completed in 6.080052171s
• [SLOW TEST:10.226 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:55:46.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-hp6c
STEP: Creating a pod to test atomic-volume-subpath
Apr 3 13:55:46.598: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-hp6c" in namespace "subpath-6451" to be "success or failure"
Apr 3 13:55:46.612: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.344274ms
Apr 3 13:55:48.618: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020028946s
Apr 3 13:55:50.623: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 4.025683867s
Apr 3 13:55:52.628: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 6.030196639s
Apr 3 13:55:54.632: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 8.03446495s
Apr 3 13:55:56.636: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 10.038826574s
Apr 3 13:55:58.641: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 12.043415049s
Apr 3 13:56:00.645: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 14.047191591s
Apr 3 13:56:02.650: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 16.051950778s
Apr 3 13:56:04.654: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 18.056368323s
Apr 3 13:56:06.659: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 20.060880114s
Apr 3 13:56:08.663: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Running", Reason="", readiness=true. Elapsed: 22.064977503s
Apr 3 13:56:10.667: INFO: Pod "pod-subpath-test-downwardapi-hp6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.069190201s
STEP: Saw pod success
Apr 3 13:56:10.667: INFO: Pod "pod-subpath-test-downwardapi-hp6c" satisfied condition "success or failure"
Apr 3 13:56:10.670: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-downwardapi-hp6c container test-container-subpath-downwardapi-hp6c:
STEP: delete the pod
Apr 3 13:56:10.689: INFO: Waiting for pod pod-subpath-test-downwardapi-hp6c to disappear
Apr 3 13:56:10.694: INFO: Pod pod-subpath-test-downwardapi-hp6c no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-hp6c
Apr 3 13:56:10.694: INFO: Deleting pod "pod-subpath-test-downwardapi-hp6c" in namespace "subpath-6451"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:56:10.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6451" for this suite.
Apr 3 13:56:16.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:56:16.789: INFO: namespace subpath-6451 deletion completed in 6.088848865s
• [SLOW TEST:30.285 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:56:16.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-76.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-76.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 3 13:56:22.961: INFO: DNS probes using dns-76/dns-test-3357ebb9-0142-40d2-aede-6ee9c2b5a1e8 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 13:56:23.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-76" for this suite.
Apr 3 13:56:29.126: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 13:56:29.207: INFO: namespace dns-76 deletion completed in 6.14289001s
• [SLOW TEST:12.417 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[k8s.io] Probing container
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 13:56:29.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-591fb31b-53b1-4f18-ab55-26e338120147 in namespace container-probe-2021
Apr 3 13:56:33.271: INFO: Started pod test-webserver-591fb31b-53b1-4f18-ab55-26e338120147 in namespace container-probe-2021
STEP: checking the pod's current state and verifying that restartCount is present
Apr 3 13:56:33.274: INFO: Initial restart count of pod test-webserver-591fb31b-53b1-4f18-ab55-26e338120147 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:00:33.845: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2021" for this suite.
Apr 3 14:00:39.892: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:00:39.965: INFO: namespace container-probe-2021 deletion completed in 6.112402562s
• [SLOW TEST:250.759 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:00:39.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 3 14:00:40.039: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88a05c1a-f4e1-4f3b-bdc7-864a3ad8bdfe" in namespace "projected-3971" to be "success or failure"
Apr 3 14:00:40.044: INFO: Pod "downwardapi-volume-88a05c1a-f4e1-4f3b-bdc7-864a3ad8bdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 4.878292ms
Apr 3 14:00:42.085: INFO: Pod "downwardapi-volume-88a05c1a-f4e1-4f3b-bdc7-864a3ad8bdfe": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045888295s
Apr 3 14:00:44.090: INFO: Pod "downwardapi-volume-88a05c1a-f4e1-4f3b-bdc7-864a3ad8bdfe": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050519623s
STEP: Saw pod success
Apr 3 14:00:44.090: INFO: Pod "downwardapi-volume-88a05c1a-f4e1-4f3b-bdc7-864a3ad8bdfe" satisfied condition "success or failure"
Apr 3 14:00:44.093: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-88a05c1a-f4e1-4f3b-bdc7-864a3ad8bdfe container client-container:
STEP: delete the pod
Apr 3 14:00:44.117: INFO: Waiting for pod downwardapi-volume-88a05c1a-f4e1-4f3b-bdc7-864a3ad8bdfe to disappear
Apr 3 14:00:44.122: INFO: Pod downwardapi-volume-88a05c1a-f4e1-4f3b-bdc7-864a3ad8bdfe no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:00:44.122: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3971" for this suite.
Apr 3 14:00:50.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:00:50.232: INFO: namespace projected-3971 deletion completed in 6.106877148s
• [SLOW TEST:10.266 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:00:50.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:01:21.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7513" for this suite.
Apr 3 14:01:27.941: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:01:28.021: INFO: namespace container-runtime-7513 deletion completed in 6.09222906s
• [SLOW TEST:37.789 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:01:28.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-ede6fe6a-b7ae-4c9e-9902-e11eb770ea3b in namespace container-probe-4868
Apr 3 14:01:32.099: INFO: Started pod liveness-ede6fe6a-b7ae-4c9e-9902-e11eb770ea3b in namespace container-probe-4868
STEP: checking the pod's current state and verifying that restartCount is present
Apr 3 14:01:32.102: INFO: Initial restart count of pod liveness-ede6fe6a-b7ae-4c9e-9902-e11eb770ea3b is 0
Apr 3 14:01:54.159: INFO: Restart count of pod container-probe-4868/liveness-ede6fe6a-b7ae-4c9e-9902-e11eb770ea3b is now 1 (22.056636121s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:01:54.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4868" for this suite.
Apr 3 14:02:00.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:02:00.309: INFO: namespace container-probe-4868 deletion completed in 6.100005985s
• [SLOW TEST:32.287 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:02:00.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-c279dd78-a328-426d-9785-de1fbcfc1f47
STEP: Creating secret with name s-test-opt-upd-cc027364-8e64-4084-897c-383c777a8125
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-c279dd78-a328-426d-9785-de1fbcfc1f47
STEP: Updating secret s-test-opt-upd-cc027364-8e64-4084-897c-383c777a8125
STEP: Creating secret with name s-test-opt-create-53008890-c590-478a-aa58-b2e04ffb89d4
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:02:10.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2699" for this suite.
Apr 3 14:02:32.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:02:32.639: INFO: namespace projected-2699 deletion completed in 22.095876007s
• [SLOW TEST:32.330 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:02:32.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Apr 3 14:02:39.875: INFO: 0 pods remaining
Apr 3 14:02:39.875: INFO: 0 pods has nil DeletionTimestamp
Apr 3 14:02:39.875: INFO:
Apr 3 14:02:40.527: INFO: 0 pods remaining
Apr 3 14:02:40.527: INFO: 0 pods has nil DeletionTimestamp
Apr 3 14:02:40.527: INFO:
STEP: Gathering metrics
W0403 14:02:41.263848 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 3 14:02:41.264: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:02:41.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4933" for this suite.
Apr 3 14:02:47.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:02:47.534: INFO: namespace gc-4933 deletion completed in 6.267511362s
• [SLOW TEST:14.895 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:02:47.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Apr 3 14:02:47.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4302'
Apr 3 14:02:50.295: INFO: stderr: ""
Apr 3 14:02:50.295: INFO: stdout: "pod/pause created\n"
Apr 3 14:02:50.295: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Apr 3 14:02:50.295: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4302" to be "running and ready"
Apr 3 14:02:50.298: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 3.117321ms
Apr 3 14:02:52.303: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007552485s
Apr 3 14:02:54.307: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.011687448s
Apr 3 14:02:54.307: INFO: Pod "pause" satisfied condition "running and ready"
Apr 3 14:02:54.307: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Apr 3 14:02:54.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4302'
Apr 3 14:02:54.426: INFO: stderr: ""
Apr 3 14:02:54.426: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Apr 3 14:02:54.426: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4302'
Apr 3 14:02:54.535: INFO: stderr: ""
Apr 3 14:02:54.535: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n"
STEP: removing the label testing-label of a pod
Apr 3 14:02:54.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4302'
Apr 3 14:02:54.618: INFO: stderr: ""
Apr 3 14:02:54.618: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Apr 3 14:02:54.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4302'
Apr 3 14:02:54.699: INFO: stderr: ""
Apr 3 14:02:54.699: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Apr 3 14:02:54.700: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4302'
Apr 3 14:02:54.806: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Apr 3 14:02:54.806: INFO: stdout: "pod \"pause\" force deleted\n"
Apr 3 14:02:54.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4302'
Apr 3 14:02:54.895: INFO: stderr: "No resources found.\n"
Apr 3 14:02:54.895: INFO: stdout: ""
Apr 3 14:02:54.895: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4302 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Apr 3 14:02:54.982: INFO: stderr: ""
Apr 3 14:02:54.982: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:02:54.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4302" for this suite.
Apr 3 14:03:01.066: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:03:01.148: INFO: namespace kubectl-4302 deletion completed in 6.16338956s
• [SLOW TEST:13.613 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:03:01.149: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-ed0891fb-314e-4a8c-a59d-54871cd4017b
STEP: Creating a pod to test consume configMaps
Apr 3 14:03:01.224: INFO: Waiting up to 5m0s for pod "pod-configmaps-e2fd8866-784c-45bb-9d19-bf96b75efc43" in namespace "configmap-9691" to be "success or failure"
Apr 3 14:03:01.233: INFO: Pod "pod-configmaps-e2fd8866-784c-45bb-9d19-bf96b75efc43": Phase="Pending", Reason="", readiness=false. Elapsed: 9.258463ms
Apr 3 14:03:03.236: INFO: Pod "pod-configmaps-e2fd8866-784c-45bb-9d19-bf96b75efc43": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01258473s
Apr 3 14:03:05.240: INFO: Pod "pod-configmaps-e2fd8866-784c-45bb-9d19-bf96b75efc43": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016514598s
STEP: Saw pod success
Apr 3 14:03:05.240: INFO: Pod "pod-configmaps-e2fd8866-784c-45bb-9d19-bf96b75efc43" satisfied condition "success or failure"
Apr 3 14:03:05.243: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-e2fd8866-784c-45bb-9d19-bf96b75efc43 container configmap-volume-test:
STEP: delete the pod
Apr 3 14:03:05.385: INFO: Waiting for pod pod-configmaps-e2fd8866-784c-45bb-9d19-bf96b75efc43 to disappear
Apr 3 14:03:05.486: INFO: Pod pod-configmaps-e2fd8866-784c-45bb-9d19-bf96b75efc43 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:03:05.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9691" for this suite.
Apr 3 14:03:11.501: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:03:11.576: INFO: namespace configmap-9691 deletion completed in 6.086635449s
• [SLOW TEST:10.428 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:03:11.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Apr 3 14:03:16.750: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:03:17.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-155" for this suite.
Apr 3 14:03:39.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:03:39.859: INFO: namespace replicaset-155 deletion completed in 22.090357142s
• [SLOW TEST:28.282 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions
  should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:03:39.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Apr 3 14:03:39.987: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Apr 3 14:03:40.180: INFO: stderr: ""
Apr 3 14:03:40.180: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:03:40.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7743" for this suite.
Apr 3 14:03:46.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:03:46.334: INFO: namespace kubectl-7743 deletion completed in 6.122871664s
• [SLOW TEST:6.474 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:03:46.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-d3667e4c-0d20-4588-bc8e-7bfff8de6cfe
STEP: Creating secret with name s-test-opt-upd-25c0e24b-a10e-47e1-aef5-3bd8867b2531
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-d3667e4c-0d20-4588-bc8e-7bfff8de6cfe
STEP: Updating secret s-test-opt-upd-25c0e24b-a10e-47e1-aef5-3bd8867b2531
STEP: Creating secret with name s-test-opt-create-7efa635e-e020-45d4-b923-e5b1eb4f3524
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:05:10.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7637" for this suite.
Apr 3 14:05:32.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:05:32.983: INFO: namespace secrets-7637 deletion completed in 22.099128165s
• [SLOW TEST:106.647 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:05:32.984: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-3989cd77-e4ff-4700-a7b8-0d74f04c33f2
STEP: Creating secret with name secret-projected-all-test-volume-6b26fae5-74b2-4529-8bcf-97d79f54a9ef
STEP: Creating a pod to test Check all projections for projected volume plugin
Apr 3 14:05:33.051: INFO: Waiting up to 5m0s for pod "projected-volume-882fc7c8-8cd8-4f5b-81c8-97f54d5d2261" in namespace "projected-8833" to be "success or failure"
Apr 3 14:05:33.112: INFO: Pod "projected-volume-882fc7c8-8cd8-4f5b-81c8-97f54d5d2261": Phase="Pending", Reason="", readiness=false. Elapsed: 61.302218ms
Apr 3 14:05:35.161: INFO: Pod "projected-volume-882fc7c8-8cd8-4f5b-81c8-97f54d5d2261": Phase="Pending", Reason="", readiness=false. Elapsed: 2.109401008s
Apr 3 14:05:37.164: INFO: Pod "projected-volume-882fc7c8-8cd8-4f5b-81c8-97f54d5d2261": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.113215516s
STEP: Saw pod success
Apr 3 14:05:37.164: INFO: Pod "projected-volume-882fc7c8-8cd8-4f5b-81c8-97f54d5d2261" satisfied condition "success or failure"
Apr 3 14:05:37.167: INFO: Trying to get logs from node iruya-worker2 pod projected-volume-882fc7c8-8cd8-4f5b-81c8-97f54d5d2261 container projected-all-volume-test:
STEP: delete the pod
Apr 3 14:05:37.202: INFO: Waiting for pod projected-volume-882fc7c8-8cd8-4f5b-81c8-97f54d5d2261 to disappear
Apr 3 14:05:37.211: INFO: Pod projected-volume-882fc7c8-8cd8-4f5b-81c8-97f54d5d2261 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:05:37.211: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8833" for this suite.
Apr 3 14:05:43.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:05:43.311: INFO: namespace projected-8833 deletion completed in 6.096482199s
• [SLOW TEST:10.328 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-apps] Deployment
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:05:43.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 3 14:05:43.355: INFO: Creating deployment "nginx-deployment"
Apr 3 14:05:43.361: INFO: Waiting for observed generation 1
Apr 3 14:05:45.371: INFO: Waiting for all required pods to come up
Apr 3 14:05:45.375: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Apr 3 14:05:53.390: INFO: Waiting for deployment "nginx-deployment" to complete
Apr 3 14:05:53.395: INFO: Updating deployment "nginx-deployment" with a non-existent image
Apr 3 14:05:53.399: INFO: Updating deployment nginx-deployment
Apr 3 14:05:53.399: INFO: Waiting for observed generation 2
Apr 3 14:05:55.418: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Apr 3 14:05:55.420: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Apr 3 14:05:55.422: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 3 14:05:55.429: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Apr 3 14:05:55.429: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Apr 3 14:05:55.431: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Apr 3 14:05:55.436: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Apr 3 14:05:55.436: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Apr 3 14:05:55.442: INFO: Updating deployment nginx-deployment
Apr 3 14:05:55.442: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Apr 3 14:05:55.453: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Apr 3 14:05:55.474: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Apr 3 14:05:55.702: INFO: Deployment "nginx-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-4397,SelfLink:/apis/apps/v1/namespaces/deployment-4397/deployments/nginx-deployment,UID:eee56c64-b5d8-4726-b317-d4508e4b6528,ResourceVersion:3404579,Generation:3,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-04-03 14:05:53 +0000 UTC 2020-04-03 14:05:43 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-04-03 14:05:55 +0000 UTC 2020-04-03 14:05:55 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},} Apr 3 14:05:55.841: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-4397,SelfLink:/apis/apps/v1/namespaces/deployment-4397/replicasets/nginx-deployment-55fb7cb77f,UID:0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4,ResourceVersion:3404619,Generation:3,CreationTimestamp:2020-04-03 14:05:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment eee56c64-b5d8-4726-b317-d4508e4b6528 0xc002ca5b27 0xc002ca5b28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 3 14:05:55.841: INFO: All old ReplicaSets of Deployment "nginx-deployment": Apr 3 14:05:55.842: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-4397,SelfLink:/apis/apps/v1/namespaces/deployment-4397/replicasets/nginx-deployment-7b8c6f4498,UID:21a307bb-3bf2-4233-9b5a-d2b4e891a411,ResourceVersion:3404616,Generation:3,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment eee56c64-b5d8-4726-b317-d4508e4b6528 0xc002ca5c07 0xc002ca5c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} Apr 3 14:05:55.904: INFO: Pod "nginx-deployment-55fb7cb77f-99jcm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-99jcm,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-99jcm,UID:f30554e1-d5dc-4e5e-b986-e42deeda8020,ResourceVersion:3404554,Generation:0,CreationTimestamp:2020-04-03 14:05:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc003365247 0xc003365248}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc0033652e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-03 14:05:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.904: INFO: Pod "nginx-deployment-55fb7cb77f-blt2r" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-blt2r,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-blt2r,UID:15293e33-9d67-46d7-bd68-6c3485f8d0d3,ResourceVersion:3404615,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc0033653d0 0xc0033653d1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003365450} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365470}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.904: INFO: Pod "nginx-deployment-55fb7cb77f-dpqb2" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-dpqb2,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-dpqb2,UID:25c4ebf2-60d8-474d-a6b6-cdd87e7b8365,ResourceVersion:3404609,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc0033654f7 0xc0033654f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc003365570} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.904: INFO: Pod "nginx-deployment-55fb7cb77f-f5s48" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f5s48,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-f5s48,UID:329f11a3-afdd-447f-911c-1b57b5291d28,ResourceVersion:3404532,Generation:0,CreationTimestamp:2020-04-03 14:05:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc003365617 0xc003365618}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003365690} {node.kubernetes.io/unreachable Exists NoExecute 0xc0033656b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-03 14:05:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.904: INFO: Pod "nginx-deployment-55fb7cb77f-f655v" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-f655v,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-f655v,UID:f60c6b16-3be4-4d27-b4cf-b88065b128e1,ResourceVersion:3404606,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc003365780 0xc003365781}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003365800} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365820}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.905: INFO: Pod "nginx-deployment-55fb7cb77f-ntq2m" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ntq2m,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-ntq2m,UID:d614fa56-33f6-42ab-a50a-abed893d8962,ResourceVersion:3404583,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc0033658a7 0xc0033658a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003365940} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365960}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.905: INFO: Pod "nginx-deployment-55fb7cb77f-p9wjc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-p9wjc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-p9wjc,UID:23309a50-02f1-426b-8767-54bd05c01475,ResourceVersion:3404553,Generation:0,CreationTimestamp:2020-04-03 14:05:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc0033659e7 0xc0033659e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003365a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-03 14:05:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.905: INFO: Pod "nginx-deployment-55fb7cb77f-phtdz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-phtdz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-phtdz,UID:fe2e99e1-8055-40e4-9e85-ea8fce077757,ResourceVersion:3404611,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc003365b50 0xc003365b51}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003365bd0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365bf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.905: INFO: Pod "nginx-deployment-55fb7cb77f-q5wcx" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-q5wcx,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-q5wcx,UID:830206a6-48f9-4196-b7fd-10d36f230d81,ResourceVersion:3404586,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc003365c77 0xc003365c78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003365cf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365d10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.905: INFO: Pod "nginx-deployment-55fb7cb77f-rf2v5" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-rf2v5,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-rf2v5,UID:ff38ab9f-7a7f-451d-a81b-3ce0916b99da,ResourceVersion:3404528,Generation:0,CreationTimestamp:2020-04-03 14:05:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc003365d97 0xc003365d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists 
NoExecute 0xc003365e10} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365e30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-03 14:05:53 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.905: INFO: Pod "nginx-deployment-55fb7cb77f-w4hjp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w4hjp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-w4hjp,UID:bb721222-51cb-44d3-a0f4-2e578241c8da,ResourceVersion:3404546,Generation:0,CreationTimestamp:2020-04-03 14:05:53 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc003365f00 0xc003365f01}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003365f80} {node.kubernetes.io/unreachable Exists NoExecute 0xc003365fa0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:53 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-03 14:05:53 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.905: INFO: Pod "nginx-deployment-55fb7cb77f-w5lnt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-w5lnt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-w5lnt,UID:d16c95bf-b28a-47c9-b9c4-70dff59702bb,ResourceVersion:3404607,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc0004480a0 0xc0004480a1}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000448160} {node.kubernetes.io/unreachable Exists NoExecute 0xc000448190}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-55fb7cb77f-xbsnz" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xbsnz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-55fb7cb77f-xbsnz,UID:1dcd8347-24d1-4ead-953c-f3080c4b5e51,ResourceVersion:3404626,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 0c1fa91f-5def-451e-b9a2-b8aeff6e4bf4 0xc000448257 0xc000448258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004482e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000448310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-03 14:05:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-7b8c6f4498-42wb6" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-42wb6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-42wb6,UID:f5e9ee48-ebcf-4bbf-9e5f-1be65b7f37d1,ResourceVersion:3404467,Generation:0,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000448420 0xc000448421}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000448490} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004484b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.71,StartTime:2020-04-03 14:05:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-03 14:05:50 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://537800ef1ff3ab28172efed07963871897d803f66137475496bec2c83b5aec6f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-7b8c6f4498-4zgr9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4zgr9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-4zgr9,UID:c0657fd9-a290-44fe-ad3f-183e1868a73f,ResourceVersion:3404585,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000448637 0xc000448638}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004486c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0004486f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-7b8c6f4498-69vj4" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-69vj4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-69vj4,UID:280405b0-e7a0-473e-9c4e-4d3c23b02ca7,ResourceVersion:3404621,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000448787 0xc000448788}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000448800} {node.kubernetes.io/unreachable Exists NoExecute 0xc000448880}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-03 14:05:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-7b8c6f4498-6df7t" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6df7t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-6df7t,UID:e37fe415-cab5-49c3-8f72-bb44b47fcc6e,ResourceVersion:3404610,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000448967 0xc000448968}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0004489e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000448a00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-7b8c6f4498-8sb25" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8sb25,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-8sb25,UID:b5877705-a41e-458e-a3cf-bddc04777041,ResourceVersion:3404608,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000449127 0xc000449128}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000449670} {node.kubernetes.io/unreachable Exists NoExecute 0xc000449770}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-7b8c6f4498-bjh2t" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-bjh2t,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-bjh2t,UID:3ae76d55-3db2-42c5-a35d-22cc91e8b1b5,ResourceVersion:3404581,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000449e27 0xc000449e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000449eb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000449ed0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-7b8c6f4498-crhw8" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-crhw8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-crhw8,UID:6a9b04c1-ad6f-48df-b53e-f44837b289ce,ResourceVersion:3404602,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000449f67 0xc000449f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000449fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000638070}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.906: INFO: Pod "nginx-deployment-7b8c6f4498-csjtm" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-csjtm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-csjtm,UID:5385dab3-a398-49c4-afd1-9cc3c8c5a4e4,ResourceVersion:3404589,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000638197 0xc000638198}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006384f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000638510}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.907: INFO: Pod "nginx-deployment-7b8c6f4498-ctlhg" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ctlhg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-ctlhg,UID:f6118a3f-0c2d-4fb1-b5be-9060703b36aa,ResourceVersion:3404479,Generation:0,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc0006385b7 0xc0006385b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006387f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000638850}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:51 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:51 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.72,StartTime:2020-04-03 14:05:43 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-04-03 14:05:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c977916322b8b952462a8304b73cc575b15a24d41e875d6138fdc0d599e1798c}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.907: INFO: Pod "nginx-deployment-7b8c6f4498-fcpz6" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fcpz6,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-fcpz6,UID:edfef455-a06c-4930-a696-64a44f3f3e35,ResourceVersion:3404588,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000638c07 0xc000638c08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000638c90} {node.kubernetes.io/unreachable Exists NoExecute 0xc000638cb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.907: INFO: Pod "nginx-deployment-7b8c6f4498-gr9g9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gr9g9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-gr9g9,UID:f201c0ba-fc1a-4bf8-93f5-5a4e24021c10,ResourceVersion:3404463,Generation:0,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc0006391f7 0xc0006391f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006393f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000639440}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:50 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:50 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.160,StartTime:2020-04-03 14:05:43 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-03 14:05:49 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4144a740d84bc0cddf91d3110c375af724f9edd9d879e4bf8ed3e8bf84f0537e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.907: INFO: Pod "nginx-deployment-7b8c6f4498-j2b2c" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-j2b2c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-j2b2c,UID:80f69ca2-78d8-445c-8dcd-d7a518c15eb5,ResourceVersion:3404590,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc0006396c7 0xc0006396c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0006397b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000639800}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.907: INFO: Pod "nginx-deployment-7b8c6f4498-k29vv" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-k29vv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-k29vv,UID:872528a3-057f-4e8d-985d-b578f6d50f4e,ResourceVersion:3404493,Generation:0,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000639a37 0xc000639a38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil 
nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000639b00} {node.kubernetes.io/unreachable Exists NoExecute 0xc000639c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.162,StartTime:2020-04-03 14:05:43 +0000 UTC,ContainerStatuses:[{nginx {nil 
ContainerStateRunning{StartedAt:2020-04-03 14:05:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://fd0d4f93db0be187bf9ed56ca247489b0650ff9b5daa5046368f9023371910d7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.907: INFO: Pod "nginx-deployment-7b8c6f4498-l6twh" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l6twh,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-l6twh,UID:09e861c6-bf9b-4d55-aba7-ee3d44c33169,ResourceVersion:3404614,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc000639d97 0xc000639d98}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c68060} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c68080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-04-03 14:05:55 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.907: INFO: Pod "nginx-deployment-7b8c6f4498-pd4p7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pd4p7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-pd4p7,UID:08bb0d7a-1c31-4843-b087-1c3322f6c22c,ResourceVersion:3404500,Generation:0,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc002c681d7 0xc002c681d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c682d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c682f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.74,StartTime:2020-04-03 14:05:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-03 14:05:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9294b45c94e3d0440234ed7557f9a058639e615956b89fd6ecadfc68162c7c07}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.907: INFO: Pod "nginx-deployment-7b8c6f4498-ql2mc" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-ql2mc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-ql2mc,UID:aae5a11d-873b-488a-a473-ed6c70d90960,ResourceVersion:3404604,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc002c68447 0xc002c68448}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c68520} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c68560}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.908: INFO: Pod "nginx-deployment-7b8c6f4498-r7fjd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r7fjd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-r7fjd,UID:b062aba6-5f7e-42c0-87e6-ff70d36a00b3,ResourceVersion:3404447,Generation:0,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc002c68657 0xc002c68658}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil 
nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c68710} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c68730}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.70,StartTime:2020-04-03 14:05:43 +0000 UTC,ContainerStatuses:[{nginx 
{nil ContainerStateRunning{StartedAt:2020-04-03 14:05:46 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://72c446d81c961ace15e7fc86764f9939dacf4dcbdbe14134f62ac7a07f336981}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.908: INFO: Pod "nginx-deployment-7b8c6f4498-sscxd" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-sscxd,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-sscxd,UID:99d72642-76ac-4f17-83dc-0c320cb5ecaf,ResourceVersion:3404450,Generation:0,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc002c68817 0xc002c68818}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c68890} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c688b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:48 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:48 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.159,StartTime:2020-04-03 14:05:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-03 14:05:47 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://97dc14874456c4b4983b0aa4c454f66a7a69f098126f7ad29a570e32dc07ef35}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.908: INFO: Pod "nginx-deployment-7b8c6f4498-w6ck4" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-w6ck4,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-w6ck4,UID:637e5ab3-325a-410e-92e9-bc019bb731a3,ResourceVersion:3404491,Generation:0,CreationTimestamp:2020-04-03 14:05:43 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc002c68c27 0xc002c68c28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c68ca0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c68d20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:52 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:52 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.163,StartTime:2020-04-03 14:05:43 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-04-03 14:05:51 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://7524859b0d68cfee2b8a2eb7fb96b13ea56df6193f491acd7ac9eae0ebcd3fa3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} Apr 3 14:05:55.908: INFO: Pod "nginx-deployment-7b8c6f4498-zzv7w" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zzv7w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-4397,SelfLink:/api/v1/namespaces/deployment-4397/pods/nginx-deployment-7b8c6f4498-zzv7w,UID:71bee191-b486-4ed6-8099-b281f4a88013,ResourceVersion:3404605,Generation:0,CreationTimestamp:2020-04-03 14:05:55 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 21a307bb-3bf2-4233-9b5a-d2b4e891a411 0xc002c68e27 0xc002c68e28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-bgxn7 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-bgxn7,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-bgxn7 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002c68ea0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002c68ec0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:05:55 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:05:55.908: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4397" for this suite. 
Apr 3 14:06:20.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:06:20.219: INFO: namespace deployment-4397 deletion completed in 24.25105718s • [SLOW TEST:36.907 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:06:20.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-2282/configmap-test-978baa39-4bc4-42f5-9106-4bf2f603528f STEP: Creating a pod to test consume configMaps Apr 3 14:06:20.301: INFO: Waiting up to 5m0s for pod "pod-configmaps-3a99f0e4-ca45-4316-9aa5-7487d90db104" in namespace "configmap-2282" to be "success or failure" Apr 3 14:06:20.304: INFO: Pod "pod-configmaps-3a99f0e4-ca45-4316-9aa5-7487d90db104": Phase="Pending", Reason="", readiness=false. Elapsed: 3.033712ms Apr 3 14:06:22.308: INFO: Pod "pod-configmaps-3a99f0e4-ca45-4316-9aa5-7487d90db104": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00662529s Apr 3 14:06:24.312: INFO: Pod "pod-configmaps-3a99f0e4-ca45-4316-9aa5-7487d90db104": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01066277s STEP: Saw pod success Apr 3 14:06:24.312: INFO: Pod "pod-configmaps-3a99f0e4-ca45-4316-9aa5-7487d90db104" satisfied condition "success or failure" Apr 3 14:06:24.315: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-3a99f0e4-ca45-4316-9aa5-7487d90db104 container env-test: STEP: delete the pod Apr 3 14:06:24.336: INFO: Waiting for pod pod-configmaps-3a99f0e4-ca45-4316-9aa5-7487d90db104 to disappear Apr 3 14:06:24.340: INFO: Pod pod-configmaps-3a99f0e4-ca45-4316-9aa5-7487d90db104 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:06:24.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-2282" for this suite. Apr 3 14:06:30.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:06:30.441: INFO: namespace configmap-2282 deletion completed in 6.097831376s • [SLOW TEST:10.222 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:06:30.441: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-88f57fd2-f315-403a-9503-a98764dd720f in namespace container-probe-5737 Apr 3 14:06:34.500: INFO: Started pod busybox-88f57fd2-f315-403a-9503-a98764dd720f in namespace container-probe-5737 STEP: checking the pod's current state and verifying that restartCount is present Apr 3 14:06:34.504: INFO: Initial restart count of pod busybox-88f57fd2-f315-403a-9503-a98764dd720f is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:10:35.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5737" for this suite. 
Apr 3 14:10:41.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:10:41.462: INFO: namespace container-probe-5737 deletion completed in 6.103972686s • [SLOW TEST:251.020 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:10:41.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 14:10:41.524: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2749aa25-69ec-4bbf-ab0f-90dbb793ef6e" in namespace "downward-api-7565" to be "success or failure" Apr 3 14:10:41.527: INFO: Pod "downwardapi-volume-2749aa25-69ec-4bbf-ab0f-90dbb793ef6e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.869264ms Apr 3 14:10:43.532: INFO: Pod "downwardapi-volume-2749aa25-69ec-4bbf-ab0f-90dbb793ef6e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007280984s Apr 3 14:10:45.536: INFO: Pod "downwardapi-volume-2749aa25-69ec-4bbf-ab0f-90dbb793ef6e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01144823s STEP: Saw pod success Apr 3 14:10:45.536: INFO: Pod "downwardapi-volume-2749aa25-69ec-4bbf-ab0f-90dbb793ef6e" satisfied condition "success or failure" Apr 3 14:10:45.540: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-2749aa25-69ec-4bbf-ab0f-90dbb793ef6e container client-container: STEP: delete the pod Apr 3 14:10:45.575: INFO: Waiting for pod downwardapi-volume-2749aa25-69ec-4bbf-ab0f-90dbb793ef6e to disappear Apr 3 14:10:45.587: INFO: Pod downwardapi-volume-2749aa25-69ec-4bbf-ab0f-90dbb793ef6e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:10:45.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7565" for this suite. 
Apr 3 14:10:51.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:10:51.682: INFO: namespace downward-api-7565 deletion completed in 6.091942897s • [SLOW TEST:10.219 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:10:51.684: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-02fab67a-e759-4dc1-ab7e-390b379413ef STEP: Creating a pod to test consume configMaps Apr 3 14:10:51.806: INFO: Waiting up to 5m0s for pod "pod-configmaps-77de19e6-590a-43f0-bad5-d75c960993e9" in namespace "configmap-1365" to be "success or failure" Apr 3 14:10:51.809: INFO: Pod "pod-configmaps-77de19e6-590a-43f0-bad5-d75c960993e9": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.75591ms Apr 3 14:10:53.814: INFO: Pod "pod-configmaps-77de19e6-590a-43f0-bad5-d75c960993e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008158028s Apr 3 14:10:55.817: INFO: Pod "pod-configmaps-77de19e6-590a-43f0-bad5-d75c960993e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011662105s STEP: Saw pod success Apr 3 14:10:55.817: INFO: Pod "pod-configmaps-77de19e6-590a-43f0-bad5-d75c960993e9" satisfied condition "success or failure" Apr 3 14:10:55.820: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-77de19e6-590a-43f0-bad5-d75c960993e9 container configmap-volume-test: STEP: delete the pod Apr 3 14:10:55.860: INFO: Waiting for pod pod-configmaps-77de19e6-590a-43f0-bad5-d75c960993e9 to disappear Apr 3 14:10:55.876: INFO: Pod pod-configmaps-77de19e6-590a-43f0-bad5-d75c960993e9 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:10:55.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1365" for this suite. 
Apr 3 14:11:01.891: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:11:01.965: INFO: namespace configmap-1365 deletion completed in 6.085595124s • [SLOW TEST:10.281 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:11:01.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-daf12487-6ecf-4098-8314-ecee4acc654b STEP: Creating a pod to test consume secrets Apr 3 14:11:02.030: INFO: Waiting up to 5m0s for pod "pod-secrets-8847ad65-c7b5-4036-9c4a-8362e863e7bf" in namespace "secrets-8810" to be "success or failure" Apr 3 14:11:02.046: INFO: Pod "pod-secrets-8847ad65-c7b5-4036-9c4a-8362e863e7bf": Phase="Pending", Reason="", readiness=false. Elapsed: 16.074409ms Apr 3 14:11:04.051: INFO: Pod "pod-secrets-8847ad65-c7b5-4036-9c4a-8362e863e7bf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.020959309s Apr 3 14:11:06.055: INFO: Pod "pod-secrets-8847ad65-c7b5-4036-9c4a-8362e863e7bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025427984s STEP: Saw pod success Apr 3 14:11:06.056: INFO: Pod "pod-secrets-8847ad65-c7b5-4036-9c4a-8362e863e7bf" satisfied condition "success or failure" Apr 3 14:11:06.059: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-8847ad65-c7b5-4036-9c4a-8362e863e7bf container secret-volume-test: STEP: delete the pod Apr 3 14:11:06.125: INFO: Waiting for pod pod-secrets-8847ad65-c7b5-4036-9c4a-8362e863e7bf to disappear Apr 3 14:11:06.129: INFO: Pod pod-secrets-8847ad65-c7b5-4036-9c4a-8362e863e7bf no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:11:06.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8810" for this suite. Apr 3 14:11:12.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:11:12.229: INFO: namespace secrets-8810 deletion completed in 6.096197403s • [SLOW TEST:10.264 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:11:12.229: INFO: >>> kubeConfig: 
/root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Apr 3 14:11:12.315: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-a,UID:f960abb0-66f9-4ada-8df1-070c28240641,ResourceVersion:3405664,Generation:0,CreationTimestamp:2020-04-03 14:11:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 3 14:11:12.315: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-a,UID:f960abb0-66f9-4ada-8df1-070c28240641,ResourceVersion:3405664,Generation:0,CreationTimestamp:2020-04-03 14:11:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Apr 3 14:11:22.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-a,UID:f960abb0-66f9-4ada-8df1-070c28240641,ResourceVersion:3405685,Generation:0,CreationTimestamp:2020-04-03 14:11:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Apr 3 14:11:22.324: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-a,UID:f960abb0-66f9-4ada-8df1-070c28240641,ResourceVersion:3405685,Generation:0,CreationTimestamp:2020-04-03 14:11:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Apr 3 14:11:32.333: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-a,UID:f960abb0-66f9-4ada-8df1-070c28240641,ResourceVersion:3405705,Generation:0,CreationTimestamp:2020-04-03 14:11:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 3 14:11:32.333: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-a,UID:f960abb0-66f9-4ada-8df1-070c28240641,ResourceVersion:3405705,Generation:0,CreationTimestamp:2020-04-03 14:11:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Apr 3 14:11:42.340: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-a,UID:f960abb0-66f9-4ada-8df1-070c28240641,ResourceVersion:3405726,Generation:0,CreationTimestamp:2020-04-03 14:11:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Apr 3 14:11:42.340: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-a,UID:f960abb0-66f9-4ada-8df1-070c28240641,ResourceVersion:3405726,Generation:0,CreationTimestamp:2020-04-03 14:11:12 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Apr 3 14:11:52.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-b,UID:9eef339d-19a7-44ea-a1fa-6ffd48f8bfdf,ResourceVersion:3405747,Generation:0,CreationTimestamp:2020-04-03 14:11:52 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 3 14:11:52.347: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-b,UID:9eef339d-19a7-44ea-a1fa-6ffd48f8bfdf,ResourceVersion:3405747,Generation:0,CreationTimestamp:2020-04-03 14:11:52 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Apr 3 14:12:02.354: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-b,UID:9eef339d-19a7-44ea-a1fa-6ffd48f8bfdf,ResourceVersion:3405767,Generation:0,CreationTimestamp:2020-04-03 14:11:52 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Apr 3 14:12:02.355: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-6805,SelfLink:/api/v1/namespaces/watch-6805/configmaps/e2e-watch-test-configmap-b,UID:9eef339d-19a7-44ea-a1fa-6ffd48f8bfdf,ResourceVersion:3405767,Generation:0,CreationTimestamp:2020-04-03 14:11:52 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:12:12.355: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6805" for this suite.
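Each event above is reported twice because two of the three watchers (label A, and label A-or-B) match the object. The configmap driving the events can be reconstructed from the dumps; a minimal sketch of its state after the first mutation, with field values taken from the log:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-configmap-a
  namespace: watch-6805
  labels:
    watch-this-configmap: multiple-watchers-A
data:
  mutation: "1"
```

Watching by label, e.g. `kubectl get configmaps -l watch-this-configmap=multiple-watchers-A --watch`, reproduces the ADDED/MODIFIED/DELETED sequence the test asserts on.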
Apr 3 14:12:18.377: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:12:18.457: INFO: namespace watch-6805 deletion completed in 6.097346353s
• [SLOW TEST:66.228 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:12:18.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9780
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Apr 3 14:12:18.487: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Apr 3 14:12:44.653: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.90:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9780 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 3 14:12:44.653: INFO: >>> kubeConfig: /root/.kube/config
I0403 14:12:44.687125 6 log.go:172] (0xc000a14dc0) (0xc001d14320) Create stream
I0403 14:12:44.687155 6 log.go:172] (0xc000a14dc0) (0xc001d14320) Stream added, broadcasting: 1
I0403 14:12:44.690333 6 log.go:172] (0xc000a14dc0) Reply frame received for 1
I0403 14:12:44.690390 6 log.go:172] (0xc000a14dc0) (0xc001d14460) Create stream
I0403 14:12:44.690413 6 log.go:172] (0xc000a14dc0) (0xc001d14460) Stream added, broadcasting: 3
I0403 14:12:44.691458 6 log.go:172] (0xc000a14dc0) Reply frame received for 3
I0403 14:12:44.691490 6 log.go:172] (0xc000a14dc0) (0xc00121d9a0) Create stream
I0403 14:12:44.691504 6 log.go:172] (0xc000a14dc0) (0xc00121d9a0) Stream added, broadcasting: 5
I0403 14:12:44.692541 6 log.go:172] (0xc000a14dc0) Reply frame received for 5
I0403 14:12:44.794107 6 log.go:172] (0xc000a14dc0) Data frame received for 3
I0403 14:12:44.794143 6 log.go:172] (0xc001d14460) (3) Data frame handling
I0403 14:12:44.794153 6 log.go:172] (0xc001d14460) (3) Data frame sent
I0403 14:12:44.794161 6 log.go:172] (0xc000a14dc0) Data frame received for 3
I0403 14:12:44.794167 6 log.go:172] (0xc001d14460) (3) Data frame handling
I0403 14:12:44.794231 6 log.go:172] (0xc000a14dc0) Data frame received for 5
I0403 14:12:44.794258 6 log.go:172] (0xc00121d9a0) (5) Data frame handling
I0403 14:12:44.796191 6 log.go:172] (0xc000a14dc0) Data frame received for 1
I0403 14:12:44.796220 6 log.go:172] (0xc001d14320) (1) Data frame handling
I0403 14:12:44.796248 6 log.go:172] (0xc001d14320) (1) Data frame sent
I0403 14:12:44.796271 6 log.go:172] (0xc000a14dc0) (0xc001d14320) Stream removed, broadcasting: 1
I0403 14:12:44.796317 6 log.go:172] (0xc000a14dc0) Go away received
I0403 14:12:44.796417 6 log.go:172] (0xc000a14dc0) (0xc001d14320) Stream removed, broadcasting: 1
I0403 14:12:44.796442 6 log.go:172] (0xc000a14dc0) (0xc001d14460) Stream removed, broadcasting: 3
I0403 14:12:44.796454 6 log.go:172] (0xc000a14dc0) (0xc00121d9a0) Stream removed, broadcasting: 5
Apr 3 14:12:44.796: INFO: Found all expected endpoints: [netserver-0]
Apr 3 14:12:44.800: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.179:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9780 PodName:host-test-container-pod ContainerName:hostexec Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Apr 3 14:12:44.800: INFO: >>> kubeConfig: /root/.kube/config
I0403 14:12:44.826530 6 log.go:172] (0xc000a3ebb0) (0xc001fb61e0) Create stream
I0403 14:12:44.826565 6 log.go:172] (0xc000a3ebb0) (0xc001fb61e0) Stream added, broadcasting: 1
I0403 14:12:44.829280 6 log.go:172] (0xc000a3ebb0) Reply frame received for 1
I0403 14:12:44.829329 6 log.go:172] (0xc000a3ebb0) (0xc001fb6280) Create stream
I0403 14:12:44.829344 6 log.go:172] (0xc000a3ebb0) (0xc001fb6280) Stream added, broadcasting: 3
I0403 14:12:44.830236 6 log.go:172] (0xc000a3ebb0) Reply frame received for 3
I0403 14:12:44.830286 6 log.go:172] (0xc000a3ebb0) (0xc001d470e0) Create stream
I0403 14:12:44.830303 6 log.go:172] (0xc000a3ebb0) (0xc001d470e0) Stream added, broadcasting: 5
I0403 14:12:44.830992 6 log.go:172] (0xc000a3ebb0) Reply frame received for 5
I0403 14:12:44.886153 6 log.go:172] (0xc000a3ebb0) Data frame received for 5
I0403 14:12:44.886216 6 log.go:172] (0xc001d470e0) (5) Data frame handling
I0403 14:12:44.886253 6 log.go:172] (0xc000a3ebb0) Data frame received for 3
I0403 14:12:44.886280 6 log.go:172] (0xc001fb6280) (3) Data frame handling
I0403 14:12:44.886302 6 log.go:172] (0xc001fb6280) (3) Data frame sent
I0403 14:12:44.886316 6 log.go:172] (0xc000a3ebb0) Data frame received for 3
I0403 14:12:44.886328 6 log.go:172] (0xc001fb6280) (3) Data frame handling
I0403 14:12:44.887719 6 log.go:172] (0xc000a3ebb0) Data frame received for 1
I0403 14:12:44.887745 6 log.go:172] (0xc001fb61e0) (1) Data frame handling
I0403 14:12:44.887757 6 log.go:172] (0xc001fb61e0) (1) Data frame sent
I0403 14:12:44.887769 6 log.go:172] (0xc000a3ebb0) (0xc001fb61e0) Stream removed, broadcasting: 1
I0403 14:12:44.887787 6 log.go:172] (0xc000a3ebb0) Go away received
I0403 14:12:44.887878 6 log.go:172] (0xc000a3ebb0) (0xc001fb61e0) Stream removed, broadcasting: 1
I0403 14:12:44.887903 6 log.go:172] (0xc000a3ebb0) (0xc001fb6280) Stream removed, broadcasting: 3
I0403 14:12:44.887916 6 log.go:172] (0xc000a3ebb0) (0xc001d470e0) Stream removed, broadcasting: 5
Apr 3 14:12:44.887: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:12:44.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9780" for this suite.
Apr 3 14:13:08.904: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:13:08.979: INFO: namespace pod-network-test-9780 deletion completed in 24.087280794s
• [SLOW TEST:50.522 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:13:08.979: INFO: >>>
kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-91866ccf-064e-47cc-927f-48d1fa95fed6
STEP: Creating a pod to test consume secrets
Apr 3 14:13:09.044: INFO: Waiting up to 5m0s for pod "pod-secrets-a40ae39b-9307-4773-aa7b-113cebd073e6" in namespace "secrets-4380" to be "success or failure"
Apr 3 14:13:09.065: INFO: Pod "pod-secrets-a40ae39b-9307-4773-aa7b-113cebd073e6": Phase="Pending", Reason="", readiness=false. Elapsed: 20.803764ms
Apr 3 14:13:11.069: INFO: Pod "pod-secrets-a40ae39b-9307-4773-aa7b-113cebd073e6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025124556s
Apr 3 14:13:13.073: INFO: Pod "pod-secrets-a40ae39b-9307-4773-aa7b-113cebd073e6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029610153s
STEP: Saw pod success
Apr 3 14:13:13.073: INFO: Pod "pod-secrets-a40ae39b-9307-4773-aa7b-113cebd073e6" satisfied condition "success or failure"
Apr 3 14:13:13.076: INFO: Trying to get logs from node iruya-worker pod pod-secrets-a40ae39b-9307-4773-aa7b-113cebd073e6 container secret-env-test:
STEP: delete the pod
Apr 3 14:13:13.098: INFO: Waiting for pod pod-secrets-a40ae39b-9307-4773-aa7b-113cebd073e6 to disappear
Apr 3 14:13:13.114: INFO: Pod pod-secrets-a40ae39b-9307-4773-aa7b-113cebd073e6 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:13:13.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4380" for this suite.
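The secret-env-test pod above consumes the secret through environment variables rather than a volume. A hedged sketch of what such a pod spec looks like; the secret name is from the log, but the key name `data-1`, the image, and the env var name are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-example   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: secret-env-test
    image: busybox            # illustrative image
    command: ["sh", "-c", "env"]
    env:
    - name: SECRET_DATA       # illustrative var name
      valueFrom:
        secretKeyRef:
          name: secret-test-91866ccf-064e-47cc-927f-48d1fa95fed6
          key: data-1         # illustrative key
```

The test passes when the pod exits 0 ("Succeeded"), i.e. the variable was injected and printed.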
Apr 3 14:13:19.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:13:19.214: INFO: namespace secrets-4380 deletion completed in 6.097028278s
• [SLOW TEST:10.234 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:13:19.214: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 3 14:13:19.272: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-4539'
Apr 3 14:13:21.720: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 3 14:13:21.720: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
Apr 3 14:13:23.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-4539'
Apr 3 14:13:23.839: INFO: stderr: ""
Apr 3 14:13:23.839: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:13:23.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4539" for this suite.
Apr 3 14:15:25.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:15:25.954: INFO: namespace kubectl-4539 deletion completed in 2m2.098611446s
• [SLOW TEST:126.740 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP:
Creating a kubernetes client
Apr 3 14:15:25.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 3 14:15:30.071: INFO: Waiting up to 5m0s for pod "client-envvars-8a41cd58-12fd-4869-9e90-8c9f557cb138" in namespace "pods-8027" to be "success or failure"
Apr 3 14:15:30.076: INFO: Pod "client-envvars-8a41cd58-12fd-4869-9e90-8c9f557cb138": Phase="Pending", Reason="", readiness=false. Elapsed: 5.593975ms
Apr 3 14:15:32.080: INFO: Pod "client-envvars-8a41cd58-12fd-4869-9e90-8c9f557cb138": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009650274s
Apr 3 14:15:34.085: INFO: Pod "client-envvars-8a41cd58-12fd-4869-9e90-8c9f557cb138": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014054777s
STEP: Saw pod success
Apr 3 14:15:34.085: INFO: Pod "client-envvars-8a41cd58-12fd-4869-9e90-8c9f557cb138" satisfied condition "success or failure"
Apr 3 14:15:34.088: INFO: Trying to get logs from node iruya-worker2 pod client-envvars-8a41cd58-12fd-4869-9e90-8c9f557cb138 container env3cont:
STEP: delete the pod
Apr 3 14:15:34.107: INFO: Waiting for pod client-envvars-8a41cd58-12fd-4869-9e90-8c9f557cb138 to disappear
Apr 3 14:15:34.112: INFO: Pod client-envvars-8a41cd58-12fd-4869-9e90-8c9f557cb138 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:15:34.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8027" for this suite.
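The env3cont container succeeds because the kubelet injects `<SERVICE>_SERVICE_HOST` and `<SERVICE>_SERVICE_PORT` variables for every service that exists when the pod starts. The name mangling (uppercase, dashes to underscores) can be sketched in shell; `my-svc` is an illustrative service name, not one from this run:

```shell
# Derive the env-var prefix Kubernetes generates for a service name:
# letters are uppercased and dashes become underscores.
svc=my-svc
prefix=$(printf '%s' "$svc" | tr 'a-z-' 'A-Z_')
echo "${prefix}_SERVICE_HOST"   # MY_SVC_SERVICE_HOST
echo "${prefix}_SERVICE_PORT"   # MY_SVC_SERVICE_PORT
```

This is also why the test creates its service before launching the client pod: the variables are only set at container start.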
Apr 3 14:16:24.146: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:16:24.226: INFO: namespace pods-8027 deletion completed in 50.111207555s
• [SLOW TEST:58.271 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:16:24.226: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Apr 3 14:16:24.308: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:16:24.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5186" for this suite.
Apr 3 14:16:30.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:16:30.512: INFO: namespace kubectl-5186 deletion completed in 6.111785838s
• [SLOW TEST:6.286 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:16:30.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 3 14:16:30.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Apr 3 14:16:30.710: INFO: stderr: ""
Apr 3 14:16:30.710: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.10\", GitCommit:\"1bea6c00a7055edef03f1d4bb58b773fa8917f11\", GitTreeState:\"clean\", BuildDate:\"2020-03-18T15:12:55Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:16:30.710: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8171" for this suite.
Apr 3 14:16:36.738: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:16:36.808: INFO: namespace kubectl-8171 deletion completed in 6.09375716s
• [SLOW TEST:6.295 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:16:36.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Apr 3 14:16:36.864: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8275'
Apr 3 14:16:37.082: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Apr 3 14:16:37.082: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Apr 3 14:16:37.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8275'
Apr 3 14:16:37.219: INFO: stderr: ""
Apr 3 14:16:37.219: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:16:37.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8275" for this suite.
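The `--generator=job/v1` form of `kubectl run` used here was deprecated at the time and later removed; the object it created can be written directly. A hedged sketch of an equivalent Job manifest (only the name, image, and restart policy are taken from the log; everything else is illustrative):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-nginx-job
spec:
  template:
    spec:
      restartPolicy: OnFailure   # the behavior the test asserts
      containers:
      - name: e2e-test-nginx-job
        image: docker.io/library/nginx:1.14-alpine
```

On newer clusters `kubectl create job e2e-test-nginx-job --image=docker.io/library/nginx:1.14-alpine` is the closest one-liner, though it defaults the pod's restart policy to Never, so the OnFailure variant needs a manifest.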
Apr 3 14:16:43.231: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:16:43.308: INFO: namespace kubectl-8275 deletion completed in 6.085718307s
• [SLOW TEST:6.499 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:16:43.309: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Apr 3 14:16:43.368: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Apr 3 14:16:43.389: INFO: Waiting for terminating namespaces to be deleted...
Apr 3 14:16:43.392: INFO: Logging pods the kubelet thinks is on node iruya-worker before test
Apr 3 14:16:43.398: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 3 14:16:43.398: INFO: Container kube-proxy ready: true, restart count 0
Apr 3 14:16:43.398: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded)
Apr 3 14:16:43.398: INFO: Container kindnet-cni ready: true, restart count 0
Apr 3 14:16:43.398: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test
Apr 3 14:16:43.406: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded)
Apr 3 14:16:43.406: INFO: Container kube-proxy ready: true, restart count 0
Apr 3 14:16:43.406: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded)
Apr 3 14:16:43.406: INFO: Container kindnet-cni ready: true, restart count 0
Apr 3 14:16:43.406: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded)
Apr 3 14:16:43.406: INFO: Container coredns ready: true, restart count 0
Apr 3 14:16:43.406: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded)
Apr 3 14:16:43.406: INFO: Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-991d110c-c6ff-4b94-a4e6-246a2a5c78b5 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-991d110c-c6ff-4b94-a4e6-246a2a5c78b5 off the node iruya-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-991d110c-c6ff-4b94-a4e6-246a2a5c78b5
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:16:51.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9999" for this suite.
Apr 3 14:17:05.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:17:05.800: INFO: namespace sched-pred-9999 deletion completed in 14.089238032s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72
• [SLOW TEST:22.491 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:17:05.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-ba0dfd8a-015b-4898-bbeb-f032824271d0 STEP: Creating a pod to test consume configMaps Apr 3 14:17:05.914: INFO: Waiting up to 5m0s for pod "pod-configmaps-fbcecb44-6fe4-496d-92b4-a6a5c6b1e06a" in namespace "configmap-4907" to be "success or failure" Apr 3 14:17:05.921: INFO: Pod "pod-configmaps-fbcecb44-6fe4-496d-92b4-a6a5c6b1e06a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.935549ms Apr 3 14:17:07.934: INFO: Pod "pod-configmaps-fbcecb44-6fe4-496d-92b4-a6a5c6b1e06a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019917351s Apr 3 14:17:09.938: INFO: Pod "pod-configmaps-fbcecb44-6fe4-496d-92b4-a6a5c6b1e06a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024221036s STEP: Saw pod success Apr 3 14:17:09.938: INFO: Pod "pod-configmaps-fbcecb44-6fe4-496d-92b4-a6a5c6b1e06a" satisfied condition "success or failure" Apr 3 14:17:09.943: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-fbcecb44-6fe4-496d-92b4-a6a5c6b1e06a container configmap-volume-test: STEP: delete the pod Apr 3 14:17:10.118: INFO: Waiting for pod pod-configmaps-fbcecb44-6fe4-496d-92b4-a6a5c6b1e06a to disappear Apr 3 14:17:10.126: INFO: Pod pod-configmaps-fbcecb44-6fe4-496d-92b4-a6a5c6b1e06a no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:17:10.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4907" for this suite. 
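The repeated 'Waiting up to 5m0s for pod … to be "success or failure"' lines reflect a simple poll loop: the framework re-fetches the pod roughly every two seconds until its phase reaches a terminal value, logging the elapsed time at each attempt. A rough stand-in, with a `get_phase` callable substituting for a real API client (names here are hypothetical, not the framework's):

```python
import time

def wait_for_pod_completion(get_phase, timeout=300.0, interval=2.0,
                            clock=time.monotonic, sleep=time.sleep):
    """Poll get_phase() until it returns a terminal phase or the timeout
    expires; mirrors the ~2s poll / 5m0s timeout pattern seen in the log."""
    start = clock()
    while True:
        phase = get_phase()
        elapsed = clock() - start
        if phase in ("Succeeded", "Failed"):
            return phase, elapsed
        if elapsed >= timeout:
            raise TimeoutError(f"pod still {phase} after {elapsed:.1f}s")
        sleep(interval)

# Simulated sequence matching the log: Pending, Pending, Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
phase, _ = wait_for_pod_completion(lambda: next(phases), sleep=lambda _: None)
print(phase)  # Succeeded
```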
Apr 3 14:17:16.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:17:16.226: INFO: namespace configmap-4907 deletion completed in 6.097460254s • [SLOW TEST:10.425 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:17:16.226: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-f4ea383f-551d-46c4-8f6a-1ab06772d293 STEP: Creating a pod to test consume configMaps Apr 3 14:17:16.307: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-97a8b93d-c785-4b4c-8d05-9390c114f692" in namespace "projected-9420" to be "success or failure" Apr 3 14:17:16.311: INFO: Pod "pod-projected-configmaps-97a8b93d-c785-4b4c-8d05-9390c114f692": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.156881ms Apr 3 14:17:18.323: INFO: Pod "pod-projected-configmaps-97a8b93d-c785-4b4c-8d05-9390c114f692": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016432858s Apr 3 14:17:20.328: INFO: Pod "pod-projected-configmaps-97a8b93d-c785-4b4c-8d05-9390c114f692": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020761686s STEP: Saw pod success Apr 3 14:17:20.328: INFO: Pod "pod-projected-configmaps-97a8b93d-c785-4b4c-8d05-9390c114f692" satisfied condition "success or failure" Apr 3 14:17:20.331: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-97a8b93d-c785-4b4c-8d05-9390c114f692 container projected-configmap-volume-test: STEP: delete the pod Apr 3 14:17:20.363: INFO: Waiting for pod pod-projected-configmaps-97a8b93d-c785-4b4c-8d05-9390c114f692 to disappear Apr 3 14:17:20.378: INFO: Pod pod-projected-configmaps-97a8b93d-c785-4b4c-8d05-9390c114f692 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:17:20.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9420" for this suite. 
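The defaultMode variants above verify the permission bits of the files the volume materializes. How a mode like 0644 maps to the ls-style string a test container would print can be sketched as follows (the helper is hypothetical, for illustration only):

```python
def mode_string(mode: int) -> str:
    """Render the low nine permission bits as an ls-style rwx string."""
    bits = "rwxrwxrwx"
    return "".join(b if mode & (1 << (8 - i)) else "-"
                   for i, b in enumerate(bits))

# defaultMode: 0644 -> owner read/write, group and others read-only
print(mode_string(0o644))  # rw-r--r--
```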
Apr 3 14:17:26.393: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:17:26.478: INFO: namespace projected-9420 deletion completed in 6.097244946s • [SLOW TEST:10.252 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:17:26.479: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 14:17:26.517: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4c3cf9de-d356-4bdc-8bbe-6e488fc32981" in namespace "downward-api-9359" to be "success or failure" Apr 3 14:17:26.530: INFO: Pod "downwardapi-volume-4c3cf9de-d356-4bdc-8bbe-6e488fc32981": Phase="Pending", Reason="", readiness=false. 
Elapsed: 13.571667ms Apr 3 14:17:28.535: INFO: Pod "downwardapi-volume-4c3cf9de-d356-4bdc-8bbe-6e488fc32981": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018034973s Apr 3 14:17:30.540: INFO: Pod "downwardapi-volume-4c3cf9de-d356-4bdc-8bbe-6e488fc32981": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022805125s STEP: Saw pod success Apr 3 14:17:30.540: INFO: Pod "downwardapi-volume-4c3cf9de-d356-4bdc-8bbe-6e488fc32981" satisfied condition "success or failure" Apr 3 14:17:30.543: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-4c3cf9de-d356-4bdc-8bbe-6e488fc32981 container client-container: STEP: delete the pod Apr 3 14:17:30.581: INFO: Waiting for pod downwardapi-volume-4c3cf9de-d356-4bdc-8bbe-6e488fc32981 to disappear Apr 3 14:17:30.592: INFO: Pod downwardapi-volume-4c3cf9de-d356-4bdc-8bbe-6e488fc32981 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:17:30.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9359" for this suite. 
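The downward-API cpu-limit test mounts a file whose content is the container's cpu limit divided by the fieldRef's divisor, rounded up. The arithmetic can be sketched with hypothetical values (the log does not show the actual limit or divisor used):

```python
import math

def downward_api_value(limit_millicores: int, divisor_millicores: int) -> int:
    """Downward API resourceFieldRef semantics: quantity / divisor,
    rounded up to the nearest integer."""
    return math.ceil(limit_millicores / divisor_millicores)

# e.g. a 1250m cpu limit exposed with a divisor of 1 core (1000m)
print(downward_api_value(1250, 1000))  # 2
```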
Apr 3 14:17:36.608: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:17:36.702: INFO: namespace downward-api-9359 deletion completed in 6.105917507s • [SLOW TEST:10.223 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:17:36.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 Apr 3 14:17:36.776: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. 
Apr 3 14:17:37.244: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set Apr 3 14:17:39.311: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721520257, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721520257, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721520257, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721520257, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 14:17:41.945: INFO: Waited 622.679331ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:17:42.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-2596" for this suite. 
Apr 3 14:17:48.567: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:17:48.643: INFO: namespace aggregator-2596 deletion completed in 6.262474584s • [SLOW TEST:11.941 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:17:48.643: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command Apr 3 14:17:48.734: INFO: Waiting up to 5m0s for pod "client-containers-b295b0cb-3869-4bb9-a7ee-1e4be49917e4" in namespace "containers-6724" to be "success or failure" Apr 3 14:17:48.737: INFO: Pod "client-containers-b295b0cb-3869-4bb9-a7ee-1e4be49917e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.670117ms Apr 3 14:17:50.749: INFO: Pod "client-containers-b295b0cb-3869-4bb9-a7ee-1e4be49917e4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.01481437s Apr 3 14:17:52.753: INFO: Pod "client-containers-b295b0cb-3869-4bb9-a7ee-1e4be49917e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018080036s STEP: Saw pod success Apr 3 14:17:52.753: INFO: Pod "client-containers-b295b0cb-3869-4bb9-a7ee-1e4be49917e4" satisfied condition "success or failure" Apr 3 14:17:52.755: INFO: Trying to get logs from node iruya-worker pod client-containers-b295b0cb-3869-4bb9-a7ee-1e4be49917e4 container test-container: STEP: delete the pod Apr 3 14:17:52.780: INFO: Waiting for pod client-containers-b295b0cb-3869-4bb9-a7ee-1e4be49917e4 to disappear Apr 3 14:17:52.798: INFO: Pod client-containers-b295b0cb-3869-4bb9-a7ee-1e4be49917e4 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:17:52.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6724" for this suite. Apr 3 14:17:58.812: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:17:58.892: INFO: namespace containers-6724 deletion completed in 6.091211888s • [SLOW TEST:10.249 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 
14:17:58.893: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod Apr 3 14:18:03.491: INFO: Successfully updated pod "pod-update-activedeadlineseconds-d8cb1227-e9fd-4c76-a8d1-ceec17534094" Apr 3 14:18:03.491: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-d8cb1227-e9fd-4c76-a8d1-ceec17534094" in namespace "pods-2096" to be "terminated due to deadline exceeded" Apr 3 14:18:03.516: INFO: Pod "pod-update-activedeadlineseconds-d8cb1227-e9fd-4c76-a8d1-ceec17534094": Phase="Running", Reason="", readiness=true. Elapsed: 25.340945ms Apr 3 14:18:05.520: INFO: Pod "pod-update-activedeadlineseconds-d8cb1227-e9fd-4c76-a8d1-ceec17534094": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.029631455s Apr 3 14:18:05.520: INFO: Pod "pod-update-activedeadlineseconds-d8cb1227-e9fd-4c76-a8d1-ceec17534094" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:18:05.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2096" for this suite. 
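The activeDeadlineSeconds test above shrinks a running pod's deadline and then waits for it to be reported as Phase="Failed", Reason="DeadlineExceeded". The decision the kubelet makes reduces to a timestamp comparison, sketched here with plain numbers and a hypothetical helper:

```python
def deadline_exceeded(start_time: float, active_deadline_seconds: int,
                      now: float) -> bool:
    """True once the pod has been running longer than its active deadline,
    at which point it is failed with reason DeadlineExceeded."""
    return (now - start_time) > active_deadline_seconds

# After the update, the pod's deadline is already in the past relative
# to its start time, so within a couple of seconds it is marked Failed.
print(deadline_exceeded(0.0, 5, 7.0))  # True
```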
Apr 3 14:18:11.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:18:11.636: INFO: namespace pods-2096 deletion completed in 6.113222608s • [SLOW TEST:12.743 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:18:11.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 14:18:11.801: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"54c68433-ea96-48e9-8098-6a567d5bf2ff", Controller:(*bool)(0xc0026e7a72), BlockOwnerDeletion:(*bool)(0xc0026e7a73)}} Apr 3 14:18:11.810: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"268a2fdb-d7f2-4d4a-9923-d80bbb71bbac", Controller:(*bool)(0xc00223ac2a), BlockOwnerDeletion:(*bool)(0xc00223ac2b)}} Apr 3 14:18:11.848: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2afd225b-bd60-4879-bb61-ff1faf58e203", 
Controller:(*bool)(0xc0026e7c22), BlockOwnerDeletion:(*bool)(0xc0026e7c23)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:18:16.884: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5541" for this suite. Apr 3 14:18:22.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:18:23.007: INFO: namespace gc-5541 deletion completed in 6.119264091s • [SLOW TEST:11.370 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:18:23.007: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-1d87419d-7c51-440e-a475-271f21abf806 STEP: Creating a pod to test consume secrets Apr 3 14:18:23.099: INFO: Waiting up to 5m0s for pod "pod-secrets-8b519997-155a-4e12-b4c1-857719729b58" in namespace 
"secrets-2865" to be "success or failure" Apr 3 14:18:23.111: INFO: Pod "pod-secrets-8b519997-155a-4e12-b4c1-857719729b58": Phase="Pending", Reason="", readiness=false. Elapsed: 11.819319ms Apr 3 14:18:25.114: INFO: Pod "pod-secrets-8b519997-155a-4e12-b4c1-857719729b58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015010386s Apr 3 14:18:27.119: INFO: Pod "pod-secrets-8b519997-155a-4e12-b4c1-857719729b58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019270573s STEP: Saw pod success Apr 3 14:18:27.119: INFO: Pod "pod-secrets-8b519997-155a-4e12-b4c1-857719729b58" satisfied condition "success or failure" Apr 3 14:18:27.122: INFO: Trying to get logs from node iruya-worker pod pod-secrets-8b519997-155a-4e12-b4c1-857719729b58 container secret-volume-test: STEP: delete the pod Apr 3 14:18:27.141: INFO: Waiting for pod pod-secrets-8b519997-155a-4e12-b4c1-857719729b58 to disappear Apr 3 14:18:27.145: INFO: Pod pod-secrets-8b519997-155a-4e12-b4c1-857719729b58 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:18:27.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-2865" for this suite. 
Apr 3 14:18:33.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:18:33.242: INFO: namespace secrets-2865 deletion completed in 6.092964539s • [SLOW TEST:10.235 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:18:33.243: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-bbb4b53d-150c-4142-a961-0a78e8e56742 STEP: Creating a pod to test consume configMaps Apr 3 14:18:33.303: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-75def8b4-5bb5-4926-8e78-c6ee3d653b7c" in namespace "projected-1569" to be "success or failure" Apr 3 14:18:33.307: INFO: Pod "pod-projected-configmaps-75def8b4-5bb5-4926-8e78-c6ee3d653b7c": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.630515ms Apr 3 14:18:35.326: INFO: Pod "pod-projected-configmaps-75def8b4-5bb5-4926-8e78-c6ee3d653b7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023323375s Apr 3 14:18:37.331: INFO: Pod "pod-projected-configmaps-75def8b4-5bb5-4926-8e78-c6ee3d653b7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.027786025s STEP: Saw pod success Apr 3 14:18:37.331: INFO: Pod "pod-projected-configmaps-75def8b4-5bb5-4926-8e78-c6ee3d653b7c" satisfied condition "success or failure" Apr 3 14:18:37.334: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-75def8b4-5bb5-4926-8e78-c6ee3d653b7c container projected-configmap-volume-test: STEP: delete the pod Apr 3 14:18:37.355: INFO: Waiting for pod pod-projected-configmaps-75def8b4-5bb5-4926-8e78-c6ee3d653b7c to disappear Apr 3 14:18:37.360: INFO: Pod pod-projected-configmaps-75def8b4-5bb5-4926-8e78-c6ee3d653b7c no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:18:37.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1569" for this suite. 
Apr 3 14:18:43.414: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:18:43.488: INFO: namespace projected-1569 deletion completed in 6.108143974s • [SLOW TEST:10.245 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:18:43.488: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 3 14:18:43.548: INFO: Waiting up to 5m0s for pod "pod-fd838da9-8cae-410c-822b-3ee5db1befa8" in namespace "emptydir-2431" to be "success or failure" Apr 3 14:18:43.558: INFO: Pod "pod-fd838da9-8cae-410c-822b-3ee5db1befa8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.757178ms Apr 3 14:18:45.562: INFO: Pod "pod-fd838da9-8cae-410c-822b-3ee5db1befa8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014358742s Apr 3 14:18:47.565: INFO: Pod "pod-fd838da9-8cae-410c-822b-3ee5db1befa8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.017747949s STEP: Saw pod success Apr 3 14:18:47.566: INFO: Pod "pod-fd838da9-8cae-410c-822b-3ee5db1befa8" satisfied condition "success or failure" Apr 3 14:18:47.568: INFO: Trying to get logs from node iruya-worker2 pod pod-fd838da9-8cae-410c-822b-3ee5db1befa8 container test-container: STEP: delete the pod Apr 3 14:18:47.584: INFO: Waiting for pod pod-fd838da9-8cae-410c-822b-3ee5db1befa8 to disappear Apr 3 14:18:47.620: INFO: Pod pod-fd838da9-8cae-410c-822b-3ee5db1befa8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:18:47.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2431" for this suite. Apr 3 14:18:53.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:18:53.773: INFO: namespace emptydir-2431 deletion completed in 6.150148557s • [SLOW TEST:10.284 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:18:53.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-6282 I0403 14:18:53.828821 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6282, replica count: 1 I0403 14:18:54.879307 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 14:18:55.879557 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 14:18:56.879772 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 3 14:18:57.006: INFO: Created: latency-svc-nfbrp Apr 3 14:18:57.014: INFO: Got endpoints: latency-svc-nfbrp [34.534563ms] Apr 3 14:18:57.057: INFO: Created: latency-svc-64n5t Apr 3 14:18:57.068: INFO: Got endpoints: latency-svc-64n5t [54.073055ms] Apr 3 14:18:57.110: INFO: Created: latency-svc-dqp8c Apr 3 14:18:57.123: INFO: Got endpoints: latency-svc-dqp8c [108.32738ms] Apr 3 14:18:57.140: INFO: Created: latency-svc-fwvqn Apr 3 14:18:57.153: INFO: Got endpoints: latency-svc-fwvqn [138.638593ms] Apr 3 14:18:57.170: INFO: Created: latency-svc-qwhfx Apr 3 14:18:57.183: INFO: Got endpoints: latency-svc-qwhfx [169.029372ms] Apr 3 14:18:57.198: INFO: Created: latency-svc-hnxnc Apr 3 14:18:57.236: INFO: Got endpoints: latency-svc-hnxnc [221.91475ms] Apr 3 14:18:57.248: INFO: Created: latency-svc-4kghw Apr 3 14:18:57.262: INFO: Got endpoints: latency-svc-4kghw [247.652758ms] Apr 3 14:18:57.301: INFO: Created: latency-svc-m8w5x Apr 3 14:18:57.328: INFO: Got endpoints: latency-svc-m8w5x [314.119565ms] Apr 3 14:18:57.386: INFO: Created: latency-svc-jldnv Apr 3 14:18:57.418: INFO: Got endpoints: latency-svc-jldnv [403.774641ms] Apr 3 
14:18:57.440: INFO: Created: latency-svc-c2xhf Apr 3 14:18:57.456: INFO: Got endpoints: latency-svc-c2xhf [441.25944ms] Apr 3 14:18:57.515: INFO: Created: latency-svc-dglhv Apr 3 14:18:57.521: INFO: Got endpoints: latency-svc-dglhv [506.287788ms] Apr 3 14:18:57.546: INFO: Created: latency-svc-ftd24 Apr 3 14:18:57.563: INFO: Got endpoints: latency-svc-ftd24 [107.147389ms] Apr 3 14:18:57.584: INFO: Created: latency-svc-jbhww Apr 3 14:18:57.599: INFO: Got endpoints: latency-svc-jbhww [584.935244ms] Apr 3 14:18:57.675: INFO: Created: latency-svc-jhg4c Apr 3 14:18:57.678: INFO: Got endpoints: latency-svc-jhg4c [663.418052ms] Apr 3 14:18:57.720: INFO: Created: latency-svc-pwvx5 Apr 3 14:18:57.732: INFO: Got endpoints: latency-svc-pwvx5 [717.586237ms] Apr 3 14:18:57.750: INFO: Created: latency-svc-pkknt Apr 3 14:18:57.762: INFO: Got endpoints: latency-svc-pkknt [747.34253ms] Apr 3 14:18:57.830: INFO: Created: latency-svc-qlqbq Apr 3 14:18:57.835: INFO: Got endpoints: latency-svc-qlqbq [820.053559ms] Apr 3 14:18:57.854: INFO: Created: latency-svc-ftlzx Apr 3 14:18:57.865: INFO: Got endpoints: latency-svc-ftlzx [796.871429ms] Apr 3 14:18:57.890: INFO: Created: latency-svc-zrkrq Apr 3 14:18:57.911: INFO: Got endpoints: latency-svc-zrkrq [788.783481ms] Apr 3 14:18:57.992: INFO: Created: latency-svc-kfcbq Apr 3 14:18:57.995: INFO: Got endpoints: latency-svc-kfcbq [841.933642ms] Apr 3 14:18:58.022: INFO: Created: latency-svc-th2hk Apr 3 14:18:58.058: INFO: Got endpoints: latency-svc-th2hk [874.606577ms] Apr 3 14:18:58.129: INFO: Created: latency-svc-rpv6d Apr 3 14:18:58.158: INFO: Created: latency-svc-v7zfk Apr 3 14:18:58.158: INFO: Got endpoints: latency-svc-rpv6d [921.48317ms] Apr 3 14:18:58.182: INFO: Got endpoints: latency-svc-v7zfk [919.853767ms] Apr 3 14:18:58.212: INFO: Created: latency-svc-bckqf Apr 3 14:18:58.220: INFO: Got endpoints: latency-svc-bckqf [891.554385ms] Apr 3 14:18:58.273: INFO: Created: latency-svc-zwffp Apr 3 14:18:58.280: INFO: Got endpoints: 
latency-svc-zwffp [861.989543ms] Apr 3 14:18:58.297: INFO: Created: latency-svc-6zlks Apr 3 14:18:58.310: INFO: Got endpoints: latency-svc-6zlks [789.579915ms] Apr 3 14:18:58.332: INFO: Created: latency-svc-j6lch Apr 3 14:18:58.347: INFO: Got endpoints: latency-svc-j6lch [783.829748ms] Apr 3 14:18:58.368: INFO: Created: latency-svc-f7zb4 Apr 3 14:18:58.405: INFO: Got endpoints: latency-svc-f7zb4 [805.031886ms] Apr 3 14:18:58.416: INFO: Created: latency-svc-c4tm4 Apr 3 14:18:58.425: INFO: Got endpoints: latency-svc-c4tm4 [747.478429ms] Apr 3 14:18:58.448: INFO: Created: latency-svc-w6nh5 Apr 3 14:18:58.468: INFO: Got endpoints: latency-svc-w6nh5 [735.703287ms] Apr 3 14:18:58.502: INFO: Created: latency-svc-c87rv Apr 3 14:18:58.560: INFO: Got endpoints: latency-svc-c87rv [798.059799ms] Apr 3 14:18:58.563: INFO: Created: latency-svc-bjlcp Apr 3 14:18:58.577: INFO: Got endpoints: latency-svc-bjlcp [742.334378ms] Apr 3 14:18:58.602: INFO: Created: latency-svc-jcgfx Apr 3 14:18:58.613: INFO: Got endpoints: latency-svc-jcgfx [747.489727ms] Apr 3 14:18:58.634: INFO: Created: latency-svc-lkphg Apr 3 14:18:58.649: INFO: Got endpoints: latency-svc-lkphg [737.548851ms] Apr 3 14:18:58.699: INFO: Created: latency-svc-xs5vq Apr 3 14:18:58.711: INFO: Got endpoints: latency-svc-xs5vq [716.16851ms] Apr 3 14:18:58.740: INFO: Created: latency-svc-jg9l6 Apr 3 14:18:58.752: INFO: Got endpoints: latency-svc-jg9l6 [693.671322ms] Apr 3 14:18:58.770: INFO: Created: latency-svc-rm2pl Apr 3 14:18:58.782: INFO: Got endpoints: latency-svc-rm2pl [624.051712ms] Apr 3 14:18:58.855: INFO: Created: latency-svc-7w9gz Apr 3 14:18:58.857: INFO: Got endpoints: latency-svc-7w9gz [675.296372ms] Apr 3 14:18:58.922: INFO: Created: latency-svc-qxs8n Apr 3 14:18:58.939: INFO: Got endpoints: latency-svc-qxs8n [718.463712ms] Apr 3 14:18:58.991: INFO: Created: latency-svc-2sjjs Apr 3 14:18:59.034: INFO: Got endpoints: latency-svc-2sjjs [753.288418ms] Apr 3 14:18:59.034: INFO: Created: latency-svc-svx4l Apr 3 
14:18:59.053: INFO: Got endpoints: latency-svc-svx4l [742.393097ms] Apr 3 14:18:59.071: INFO: Created: latency-svc-bpkt4 Apr 3 14:18:59.141: INFO: Got endpoints: latency-svc-bpkt4 [793.795976ms] Apr 3 14:18:59.156: INFO: Created: latency-svc-bj6tw Apr 3 14:18:59.167: INFO: Got endpoints: latency-svc-bj6tw [762.560515ms] Apr 3 14:18:59.185: INFO: Created: latency-svc-5c626 Apr 3 14:18:59.198: INFO: Got endpoints: latency-svc-5c626 [772.221184ms] Apr 3 14:18:59.221: INFO: Created: latency-svc-qqn8z Apr 3 14:18:59.308: INFO: Got endpoints: latency-svc-qqn8z [840.68035ms] Apr 3 14:18:59.310: INFO: Created: latency-svc-fdt6w Apr 3 14:18:59.319: INFO: Got endpoints: latency-svc-fdt6w [759.436971ms] Apr 3 14:18:59.348: INFO: Created: latency-svc-fmq54 Apr 3 14:18:59.360: INFO: Got endpoints: latency-svc-fmq54 [782.972889ms] Apr 3 14:18:59.381: INFO: Created: latency-svc-26nwv Apr 3 14:18:59.398: INFO: Got endpoints: latency-svc-26nwv [784.91354ms] Apr 3 14:18:59.471: INFO: Created: latency-svc-phv2r Apr 3 14:18:59.475: INFO: Got endpoints: latency-svc-phv2r [825.464203ms] Apr 3 14:18:59.504: INFO: Created: latency-svc-57r25 Apr 3 14:18:59.517: INFO: Got endpoints: latency-svc-57r25 [805.925189ms] Apr 3 14:18:59.621: INFO: Created: latency-svc-kpd2s Apr 3 14:18:59.624: INFO: Got endpoints: latency-svc-kpd2s [872.484756ms] Apr 3 14:18:59.660: INFO: Created: latency-svc-vpbrn Apr 3 14:18:59.674: INFO: Got endpoints: latency-svc-vpbrn [892.138256ms] Apr 3 14:18:59.702: INFO: Created: latency-svc-9nn9c Apr 3 14:18:59.716: INFO: Got endpoints: latency-svc-9nn9c [858.929268ms] Apr 3 14:18:59.770: INFO: Created: latency-svc-gwvlh Apr 3 14:18:59.785: INFO: Got endpoints: latency-svc-gwvlh [846.720792ms] Apr 3 14:18:59.816: INFO: Created: latency-svc-f2dl4 Apr 3 14:18:59.831: INFO: Got endpoints: latency-svc-f2dl4 [797.021211ms] Apr 3 14:18:59.852: INFO: Created: latency-svc-5kczt Apr 3 14:18:59.868: INFO: Got endpoints: latency-svc-5kczt [815.05728ms] Apr 3 14:18:59.914: INFO: 
Created: latency-svc-482np Apr 3 14:18:59.917: INFO: Got endpoints: latency-svc-482np [776.476235ms] Apr 3 14:18:59.958: INFO: Created: latency-svc-w8pbc Apr 3 14:18:59.970: INFO: Got endpoints: latency-svc-w8pbc [802.60721ms] Apr 3 14:19:00.003: INFO: Created: latency-svc-zlbzz Apr 3 14:19:00.045: INFO: Got endpoints: latency-svc-zlbzz [847.459187ms] Apr 3 14:19:00.056: INFO: Created: latency-svc-bcfkl Apr 3 14:19:00.072: INFO: Got endpoints: latency-svc-bcfkl [763.934469ms] Apr 3 14:19:00.090: INFO: Created: latency-svc-8nxgx Apr 3 14:19:00.103: INFO: Got endpoints: latency-svc-8nxgx [783.042412ms] Apr 3 14:19:00.126: INFO: Created: latency-svc-n84w7 Apr 3 14:19:00.139: INFO: Got endpoints: latency-svc-n84w7 [778.598961ms] Apr 3 14:19:00.190: INFO: Created: latency-svc-mnxlv Apr 3 14:19:00.193: INFO: Got endpoints: latency-svc-mnxlv [794.937992ms] Apr 3 14:19:00.242: INFO: Created: latency-svc-7lfzf Apr 3 14:19:00.256: INFO: Got endpoints: latency-svc-7lfzf [781.870288ms] Apr 3 14:19:00.328: INFO: Created: latency-svc-x7cqq Apr 3 14:19:00.330: INFO: Got endpoints: latency-svc-x7cqq [812.794174ms] Apr 3 14:19:00.360: INFO: Created: latency-svc-8x4c8 Apr 3 14:19:00.374: INFO: Got endpoints: latency-svc-8x4c8 [749.640612ms] Apr 3 14:19:00.396: INFO: Created: latency-svc-vzlzm Apr 3 14:19:00.415: INFO: Got endpoints: latency-svc-vzlzm [741.02886ms] Apr 3 14:19:00.476: INFO: Created: latency-svc-2pw4l Apr 3 14:19:00.489: INFO: Got endpoints: latency-svc-2pw4l [772.478222ms] Apr 3 14:19:00.524: INFO: Created: latency-svc-qc7cd Apr 3 14:19:00.543: INFO: Got endpoints: latency-svc-qc7cd [757.054638ms] Apr 3 14:19:00.603: INFO: Created: latency-svc-26ftk Apr 3 14:19:00.609: INFO: Got endpoints: latency-svc-26ftk [778.478041ms] Apr 3 14:19:00.630: INFO: Created: latency-svc-tgw56 Apr 3 14:19:00.639: INFO: Got endpoints: latency-svc-tgw56 [771.275334ms] Apr 3 14:19:00.660: INFO: Created: latency-svc-vm62z Apr 3 14:19:00.669: INFO: Got endpoints: latency-svc-vm62z 
[752.136035ms] Apr 3 14:19:00.692: INFO: Created: latency-svc-fz7df Apr 3 14:19:00.727: INFO: Got endpoints: latency-svc-fz7df [757.331283ms] Apr 3 14:19:00.752: INFO: Created: latency-svc-64jnv Apr 3 14:19:00.780: INFO: Got endpoints: latency-svc-64jnv [734.641216ms] Apr 3 14:19:00.811: INFO: Created: latency-svc-s4vjp Apr 3 14:19:00.883: INFO: Got endpoints: latency-svc-s4vjp [810.868189ms] Apr 3 14:19:00.886: INFO: Created: latency-svc-6nxbw Apr 3 14:19:00.892: INFO: Got endpoints: latency-svc-6nxbw [789.768235ms] Apr 3 14:19:00.932: INFO: Created: latency-svc-2tjwp Apr 3 14:19:00.947: INFO: Got endpoints: latency-svc-2tjwp [808.724995ms] Apr 3 14:19:00.980: INFO: Created: latency-svc-xdlh6 Apr 3 14:19:01.033: INFO: Got endpoints: latency-svc-xdlh6 [840.244726ms] Apr 3 14:19:01.035: INFO: Created: latency-svc-fzqgl Apr 3 14:19:01.043: INFO: Got endpoints: latency-svc-fzqgl [786.603766ms] Apr 3 14:19:01.062: INFO: Created: latency-svc-ghtkr Apr 3 14:19:01.074: INFO: Got endpoints: latency-svc-ghtkr [743.459882ms] Apr 3 14:19:01.094: INFO: Created: latency-svc-7qqnf Apr 3 14:19:01.110: INFO: Got endpoints: latency-svc-7qqnf [735.980477ms] Apr 3 14:19:01.129: INFO: Created: latency-svc-b82bx Apr 3 14:19:01.189: INFO: Got endpoints: latency-svc-b82bx [773.462405ms] Apr 3 14:19:01.191: INFO: Created: latency-svc-8tthh Apr 3 14:19:01.194: INFO: Got endpoints: latency-svc-8tthh [705.407053ms] Apr 3 14:19:01.224: INFO: Created: latency-svc-t6n5v Apr 3 14:19:01.249: INFO: Got endpoints: latency-svc-t6n5v [706.192219ms] Apr 3 14:19:01.280: INFO: Created: latency-svc-8tmsg Apr 3 14:19:01.315: INFO: Got endpoints: latency-svc-8tmsg [705.321942ms] Apr 3 14:19:01.340: INFO: Created: latency-svc-vwgs7 Apr 3 14:19:01.363: INFO: Got endpoints: latency-svc-vwgs7 [724.025573ms] Apr 3 14:19:01.380: INFO: Created: latency-svc-j8gks Apr 3 14:19:01.400: INFO: Got endpoints: latency-svc-j8gks [730.105603ms] Apr 3 14:19:01.489: INFO: Created: latency-svc-kx6r9 Apr 3 14:19:01.492: INFO: 
Got endpoints: latency-svc-kx6r9 [764.750276ms] Apr 3 14:19:01.520: INFO: Created: latency-svc-8z6wv Apr 3 14:19:01.550: INFO: Got endpoints: latency-svc-8z6wv [770.303516ms] Apr 3 14:19:01.580: INFO: Created: latency-svc-4tvbw Apr 3 14:19:01.632: INFO: Got endpoints: latency-svc-4tvbw [748.30114ms] Apr 3 14:19:01.634: INFO: Created: latency-svc-sktgx Apr 3 14:19:01.640: INFO: Got endpoints: latency-svc-sktgx [747.886714ms] Apr 3 14:19:01.662: INFO: Created: latency-svc-crppg Apr 3 14:19:01.683: INFO: Got endpoints: latency-svc-crppg [735.589747ms] Apr 3 14:19:01.710: INFO: Created: latency-svc-twk6k Apr 3 14:19:01.725: INFO: Got endpoints: latency-svc-twk6k [692.198905ms] Apr 3 14:19:01.782: INFO: Created: latency-svc-ztxqg Apr 3 14:19:01.785: INFO: Got endpoints: latency-svc-ztxqg [741.969068ms] Apr 3 14:19:01.860: INFO: Created: latency-svc-l5wnl Apr 3 14:19:01.870: INFO: Got endpoints: latency-svc-l5wnl [796.145718ms] Apr 3 14:19:01.926: INFO: Created: latency-svc-pqpb7 Apr 3 14:19:01.928: INFO: Got endpoints: latency-svc-pqpb7 [817.855754ms] Apr 3 14:19:01.958: INFO: Created: latency-svc-hj2lq Apr 3 14:19:01.994: INFO: Got endpoints: latency-svc-hj2lq [804.932417ms] Apr 3 14:19:02.027: INFO: Created: latency-svc-zclb6 Apr 3 14:19:02.081: INFO: Got endpoints: latency-svc-zclb6 [886.261929ms] Apr 3 14:19:02.083: INFO: Created: latency-svc-2r9t5 Apr 3 14:19:02.087: INFO: Got endpoints: latency-svc-2r9t5 [837.767156ms] Apr 3 14:19:02.106: INFO: Created: latency-svc-drt7j Apr 3 14:19:02.117: INFO: Got endpoints: latency-svc-drt7j [802.114028ms] Apr 3 14:19:02.150: INFO: Created: latency-svc-fqqxl Apr 3 14:19:02.167: INFO: Got endpoints: latency-svc-fqqxl [803.499456ms] Apr 3 14:19:02.225: INFO: Created: latency-svc-msm97 Apr 3 14:19:02.227: INFO: Got endpoints: latency-svc-msm97 [827.736044ms] Apr 3 14:19:02.256: INFO: Created: latency-svc-949lw Apr 3 14:19:02.269: INFO: Got endpoints: latency-svc-949lw [777.177628ms] Apr 3 14:19:02.288: INFO: Created: 
latency-svc-lhn52 Apr 3 14:19:02.300: INFO: Got endpoints: latency-svc-lhn52 [749.997089ms] Apr 3 14:19:02.324: INFO: Created: latency-svc-qxdv4 Apr 3 14:19:02.368: INFO: Got endpoints: latency-svc-qxdv4 [736.175169ms] Apr 3 14:19:02.378: INFO: Created: latency-svc-xzccv Apr 3 14:19:02.403: INFO: Got endpoints: latency-svc-xzccv [762.646465ms] Apr 3 14:19:02.424: INFO: Created: latency-svc-xm7xb Apr 3 14:19:02.454: INFO: Got endpoints: latency-svc-xm7xb [770.810301ms] Apr 3 14:19:02.536: INFO: Created: latency-svc-678n6 Apr 3 14:19:02.539: INFO: Got endpoints: latency-svc-678n6 [813.863101ms] Apr 3 14:19:02.564: INFO: Created: latency-svc-4cgln Apr 3 14:19:02.577: INFO: Got endpoints: latency-svc-4cgln [791.770643ms] Apr 3 14:19:02.598: INFO: Created: latency-svc-sjgqn Apr 3 14:19:02.614: INFO: Got endpoints: latency-svc-sjgqn [744.154826ms] Apr 3 14:19:02.680: INFO: Created: latency-svc-cw2hc Apr 3 14:19:02.706: INFO: Got endpoints: latency-svc-cw2hc [777.879108ms] Apr 3 14:19:02.706: INFO: Created: latency-svc-r5vxv Apr 3 14:19:02.722: INFO: Got endpoints: latency-svc-r5vxv [728.398627ms] Apr 3 14:19:02.750: INFO: Created: latency-svc-5zj7v Apr 3 14:19:02.759: INFO: Got endpoints: latency-svc-5zj7v [677.800103ms] Apr 3 14:19:02.780: INFO: Created: latency-svc-klq7g Apr 3 14:19:02.835: INFO: Got endpoints: latency-svc-klq7g [748.628396ms] Apr 3 14:19:02.837: INFO: Created: latency-svc-4v5gr Apr 3 14:19:02.843: INFO: Got endpoints: latency-svc-4v5gr [725.728233ms] Apr 3 14:19:02.868: INFO: Created: latency-svc-g9tfz Apr 3 14:19:02.885: INFO: Got endpoints: latency-svc-g9tfz [718.32624ms] Apr 3 14:19:02.906: INFO: Created: latency-svc-2hd4j Apr 3 14:19:02.927: INFO: Got endpoints: latency-svc-2hd4j [699.869111ms] Apr 3 14:19:02.972: INFO: Created: latency-svc-jzrkj Apr 3 14:19:02.981: INFO: Got endpoints: latency-svc-jzrkj [712.160889ms] Apr 3 14:19:03.018: INFO: Created: latency-svc-dh8kk Apr 3 14:19:03.030: INFO: Got endpoints: latency-svc-dh8kk [729.770269ms] Apr 
3 14:19:03.055: INFO: Created: latency-svc-jk925 Apr 3 14:19:03.067: INFO: Got endpoints: latency-svc-jk925 [698.756047ms] Apr 3 14:19:03.117: INFO: Created: latency-svc-zm4jx Apr 3 14:19:03.120: INFO: Got endpoints: latency-svc-zm4jx [716.844876ms] Apr 3 14:19:03.170: INFO: Created: latency-svc-gf2cp Apr 3 14:19:03.181: INFO: Got endpoints: latency-svc-gf2cp [726.977444ms] Apr 3 14:19:03.204: INFO: Created: latency-svc-dpp59 Apr 3 14:19:03.266: INFO: Got endpoints: latency-svc-dpp59 [726.904281ms] Apr 3 14:19:03.269: INFO: Created: latency-svc-72n8d Apr 3 14:19:03.277: INFO: Got endpoints: latency-svc-72n8d [700.393887ms] Apr 3 14:19:03.300: INFO: Created: latency-svc-zh4bx Apr 3 14:19:03.314: INFO: Got endpoints: latency-svc-zh4bx [699.937718ms] Apr 3 14:19:03.344: INFO: Created: latency-svc-phqng Apr 3 14:19:03.423: INFO: Got endpoints: latency-svc-phqng [716.651975ms] Apr 3 14:19:03.424: INFO: Created: latency-svc-rgmf7 Apr 3 14:19:03.434: INFO: Got endpoints: latency-svc-rgmf7 [712.034464ms] Apr 3 14:19:03.474: INFO: Created: latency-svc-bp2q2 Apr 3 14:19:03.488: INFO: Got endpoints: latency-svc-bp2q2 [729.715388ms] Apr 3 14:19:03.522: INFO: Created: latency-svc-k7gmp Apr 3 14:19:03.596: INFO: Got endpoints: latency-svc-k7gmp [760.237712ms] Apr 3 14:19:03.598: INFO: Created: latency-svc-6n6mf Apr 3 14:19:03.603: INFO: Got endpoints: latency-svc-6n6mf [760.130679ms] Apr 3 14:19:03.642: INFO: Created: latency-svc-98b68 Apr 3 14:19:03.657: INFO: Got endpoints: latency-svc-98b68 [771.880103ms] Apr 3 14:19:03.684: INFO: Created: latency-svc-cv6kf Apr 3 14:19:03.763: INFO: Got endpoints: latency-svc-cv6kf [836.010474ms] Apr 3 14:19:03.766: INFO: Created: latency-svc-6twjm Apr 3 14:19:03.778: INFO: Got endpoints: latency-svc-6twjm [796.404578ms] Apr 3 14:19:03.806: INFO: Created: latency-svc-mzfcl Apr 3 14:19:03.814: INFO: Got endpoints: latency-svc-mzfcl [783.961625ms] Apr 3 14:19:03.836: INFO: Created: latency-svc-ctjvx Apr 3 14:19:03.844: INFO: Got endpoints: 
latency-svc-ctjvx [777.349818ms] Apr 3 14:19:03.902: INFO: Created: latency-svc-8nbc5 Apr 3 14:19:03.905: INFO: Got endpoints: latency-svc-8nbc5 [784.568393ms] Apr 3 14:19:03.930: INFO: Created: latency-svc-kmft6 Apr 3 14:19:03.947: INFO: Got endpoints: latency-svc-kmft6 [766.150958ms] Apr 3 14:19:03.967: INFO: Created: latency-svc-hn954 Apr 3 14:19:04.033: INFO: Got endpoints: latency-svc-hn954 [766.463294ms] Apr 3 14:19:04.058: INFO: Created: latency-svc-j4htz Apr 3 14:19:04.074: INFO: Got endpoints: latency-svc-j4htz [796.378146ms] Apr 3 14:19:04.092: INFO: Created: latency-svc-m7wxg Apr 3 14:19:04.104: INFO: Got endpoints: latency-svc-m7wxg [790.047686ms] Apr 3 14:19:04.122: INFO: Created: latency-svc-2lt9z Apr 3 14:19:04.171: INFO: Got endpoints: latency-svc-2lt9z [748.396972ms] Apr 3 14:19:04.208: INFO: Created: latency-svc-zgc4b Apr 3 14:19:04.218: INFO: Got endpoints: latency-svc-zgc4b [783.90782ms] Apr 3 14:19:04.243: INFO: Created: latency-svc-2ldxt Apr 3 14:19:04.255: INFO: Got endpoints: latency-svc-2ldxt [766.33645ms] Apr 3 14:19:04.334: INFO: Created: latency-svc-qv6r2 Apr 3 14:19:04.335: INFO: Got endpoints: latency-svc-qv6r2 [739.660438ms] Apr 3 14:19:04.363: INFO: Created: latency-svc-klmxf Apr 3 14:19:04.375: INFO: Got endpoints: latency-svc-klmxf [772.331265ms] Apr 3 14:19:04.400: INFO: Created: latency-svc-8lzjv Apr 3 14:19:04.411: INFO: Got endpoints: latency-svc-8lzjv [754.124888ms] Apr 3 14:19:04.495: INFO: Created: latency-svc-54xzg Apr 3 14:19:04.498: INFO: Got endpoints: latency-svc-54xzg [734.056344ms] Apr 3 14:19:04.554: INFO: Created: latency-svc-cwvcj Apr 3 14:19:04.568: INFO: Got endpoints: latency-svc-cwvcj [790.024003ms] Apr 3 14:19:04.585: INFO: Created: latency-svc-qjkmw Apr 3 14:19:04.625: INFO: Got endpoints: latency-svc-qjkmw [811.389325ms] Apr 3 14:19:04.639: INFO: Created: latency-svc-5pd8k Apr 3 14:19:04.654: INFO: Got endpoints: latency-svc-5pd8k [809.579058ms] Apr 3 14:19:04.675: INFO: Created: latency-svc-wx6dk Apr 3 
14:19:04.711: INFO: Got endpoints: latency-svc-wx6dk [806.67275ms] Apr 3 14:19:04.758: INFO: Created: latency-svc-qvvmc Apr 3 14:19:04.782: INFO: Got endpoints: latency-svc-qvvmc [834.430318ms] Apr 3 14:19:04.783: INFO: Created: latency-svc-5l8mg Apr 3 14:19:04.798: INFO: Got endpoints: latency-svc-5l8mg [764.820549ms] Apr 3 14:19:04.819: INFO: Created: latency-svc-4c6ct Apr 3 14:19:04.831: INFO: Got endpoints: latency-svc-4c6ct [756.863855ms] Apr 3 14:19:04.895: INFO: Created: latency-svc-92s9b Apr 3 14:19:04.926: INFO: Got endpoints: latency-svc-92s9b [821.403863ms] Apr 3 14:19:04.926: INFO: Created: latency-svc-v9gcr Apr 3 14:19:04.946: INFO: Got endpoints: latency-svc-v9gcr [774.752627ms] Apr 3 14:19:04.968: INFO: Created: latency-svc-jt5bl Apr 3 14:19:04.976: INFO: Got endpoints: latency-svc-jt5bl [757.412757ms] Apr 3 14:19:05.033: INFO: Created: latency-svc-wzjz4 Apr 3 14:19:05.036: INFO: Got endpoints: latency-svc-wzjz4 [780.995889ms] Apr 3 14:19:05.090: INFO: Created: latency-svc-pvftr Apr 3 14:19:05.102: INFO: Got endpoints: latency-svc-pvftr [767.009922ms] Apr 3 14:19:05.120: INFO: Created: latency-svc-gjbtk Apr 3 14:19:05.183: INFO: Got endpoints: latency-svc-gjbtk [807.460536ms] Apr 3 14:19:05.184: INFO: Created: latency-svc-g8qtp Apr 3 14:19:05.193: INFO: Got endpoints: latency-svc-g8qtp [781.157446ms] Apr 3 14:19:05.214: INFO: Created: latency-svc-xv5xv Apr 3 14:19:05.223: INFO: Got endpoints: latency-svc-xv5xv [725.630928ms] Apr 3 14:19:05.245: INFO: Created: latency-svc-95gxs Apr 3 14:19:05.260: INFO: Got endpoints: latency-svc-95gxs [691.580542ms] Apr 3 14:19:05.281: INFO: Created: latency-svc-5vjwp Apr 3 14:19:05.345: INFO: Got endpoints: latency-svc-5vjwp [719.690122ms] Apr 3 14:19:05.348: INFO: Created: latency-svc-cn7n7 Apr 3 14:19:05.356: INFO: Got endpoints: latency-svc-cn7n7 [702.020194ms] Apr 3 14:19:05.375: INFO: Created: latency-svc-pqlfj Apr 3 14:19:05.386: INFO: Got endpoints: latency-svc-pqlfj [675.0024ms] Apr 3 14:19:05.412: INFO: 
Created: latency-svc-vbg9z Apr 3 14:19:05.441: INFO: Got endpoints: latency-svc-vbg9z [659.021047ms] Apr 3 14:19:05.494: INFO: Created: latency-svc-jwr8j Apr 3 14:19:05.515: INFO: Got endpoints: latency-svc-jwr8j [717.588244ms] Apr 3 14:19:05.553: INFO: Created: latency-svc-jsnt5 Apr 3 14:19:05.567: INFO: Got endpoints: latency-svc-jsnt5 [736.170534ms] Apr 3 14:19:05.585: INFO: Created: latency-svc-x76vl Apr 3 14:19:05.647: INFO: Got endpoints: latency-svc-x76vl [721.435098ms] Apr 3 14:19:05.649: INFO: Created: latency-svc-h5dk9 Apr 3 14:19:05.657: INFO: Got endpoints: latency-svc-h5dk9 [711.419484ms] Apr 3 14:19:05.726: INFO: Created: latency-svc-lgl8s Apr 3 14:19:05.742: INFO: Got endpoints: latency-svc-lgl8s [766.108943ms] Apr 3 14:19:05.788: INFO: Created: latency-svc-7wcjl Apr 3 14:19:05.790: INFO: Got endpoints: latency-svc-7wcjl [754.700435ms] Apr 3 14:19:05.825: INFO: Created: latency-svc-lzm6h Apr 3 14:19:05.850: INFO: Got endpoints: latency-svc-lzm6h [747.940556ms] Apr 3 14:19:05.876: INFO: Created: latency-svc-tlpt7 Apr 3 14:19:05.925: INFO: Got endpoints: latency-svc-tlpt7 [742.213963ms] Apr 3 14:19:05.941: INFO: Created: latency-svc-vrd2h Apr 3 14:19:05.954: INFO: Got endpoints: latency-svc-vrd2h [760.923469ms] Apr 3 14:19:05.972: INFO: Created: latency-svc-89sdm Apr 3 14:19:05.983: INFO: Got endpoints: latency-svc-89sdm [760.126553ms] Apr 3 14:19:06.006: INFO: Created: latency-svc-zvbb9 Apr 3 14:19:06.075: INFO: Got endpoints: latency-svc-zvbb9 [815.18019ms] Apr 3 14:19:06.096: INFO: Created: latency-svc-lxkbt Apr 3 14:19:06.116: INFO: Got endpoints: latency-svc-lxkbt [770.698374ms] Apr 3 14:19:06.139: INFO: Created: latency-svc-76z2q Apr 3 14:19:06.152: INFO: Got endpoints: latency-svc-76z2q [795.838016ms] Apr 3 14:19:06.170: INFO: Created: latency-svc-9cf88 Apr 3 14:19:06.219: INFO: Got endpoints: latency-svc-9cf88 [832.300822ms] Apr 3 14:19:06.240: INFO: Created: latency-svc-pqnxf Apr 3 14:19:06.248: INFO: Got endpoints: latency-svc-pqnxf 
[807.402647ms] Apr 3 14:19:06.275: INFO: Created: latency-svc-z5xkt Apr 3 14:19:06.285: INFO: Got endpoints: latency-svc-z5xkt [769.135505ms] Apr 3 14:19:06.307: INFO: Created: latency-svc-tm9bw Apr 3 14:19:06.368: INFO: Got endpoints: latency-svc-tm9bw [801.036971ms] Apr 3 14:19:06.370: INFO: Created: latency-svc-dfxfw Apr 3 14:19:06.375: INFO: Got endpoints: latency-svc-dfxfw [728.226587ms] Apr 3 14:19:06.403: INFO: Created: latency-svc-sd5cf Apr 3 14:19:06.418: INFO: Got endpoints: latency-svc-sd5cf [760.251162ms] Apr 3 14:19:06.444: INFO: Created: latency-svc-dg7l4 Apr 3 14:19:06.454: INFO: Got endpoints: latency-svc-dg7l4 [712.309199ms] Apr 3 14:19:06.507: INFO: Created: latency-svc-9bhdg Apr 3 14:19:06.508: INFO: Got endpoints: latency-svc-9bhdg [717.826056ms] Apr 3 14:19:06.548: INFO: Created: latency-svc-2lc22 Apr 3 14:19:06.562: INFO: Got endpoints: latency-svc-2lc22 [711.820574ms] Apr 3 14:19:06.590: INFO: Created: latency-svc-ks58f Apr 3 14:19:06.632: INFO: Got endpoints: latency-svc-ks58f [706.642619ms] Apr 3 14:19:06.647: INFO: Created: latency-svc-d2jq2 Apr 3 14:19:06.677: INFO: Got endpoints: latency-svc-d2jq2 [723.547609ms] Apr 3 14:19:06.711: INFO: Created: latency-svc-89s46 Apr 3 14:19:06.719: INFO: Got endpoints: latency-svc-89s46 [735.560947ms] Apr 3 14:19:06.770: INFO: Created: latency-svc-4x9cl Apr 3 14:19:06.774: INFO: Got endpoints: latency-svc-4x9cl [698.948683ms] Apr 3 14:19:06.793: INFO: Created: latency-svc-7jxm8 Apr 3 14:19:06.810: INFO: Got endpoints: latency-svc-7jxm8 [694.099806ms] Apr 3 14:19:06.829: INFO: Created: latency-svc-qwwqp Apr 3 14:19:06.846: INFO: Got endpoints: latency-svc-qwwqp [694.478183ms] Apr 3 14:19:06.872: INFO: Created: latency-svc-t4bld Apr 3 14:19:06.931: INFO: Got endpoints: latency-svc-t4bld [712.327601ms] Apr 3 14:19:06.934: INFO: Created: latency-svc-sgtlj Apr 3 14:19:06.943: INFO: Got endpoints: latency-svc-sgtlj [694.288459ms] Apr 3 14:19:06.980: INFO: Created: latency-svc-kwmzv Apr 3 14:19:07.003: INFO: 
Got endpoints: latency-svc-kwmzv [718.232567ms] Apr 3 14:19:07.075: INFO: Created: latency-svc-btct7 Apr 3 14:19:07.082: INFO: Got endpoints: latency-svc-btct7 [713.551726ms] Apr 3 14:19:07.110: INFO: Created: latency-svc-92k5z Apr 3 14:19:07.123: INFO: Got endpoints: latency-svc-92k5z [748.092091ms] Apr 3 14:19:07.160: INFO: Created: latency-svc-5njks Apr 3 14:19:07.171: INFO: Got endpoints: latency-svc-5njks [753.640806ms] Apr 3 14:19:07.171: INFO: Latencies: [54.073055ms 107.147389ms 108.32738ms 138.638593ms 169.029372ms 221.91475ms 247.652758ms 314.119565ms 403.774641ms 441.25944ms 506.287788ms 584.935244ms 624.051712ms 659.021047ms 663.418052ms 675.0024ms 675.296372ms 677.800103ms 691.580542ms 692.198905ms 693.671322ms 694.099806ms 694.288459ms 694.478183ms 698.756047ms 698.948683ms 699.869111ms 699.937718ms 700.393887ms 702.020194ms 705.321942ms 705.407053ms 706.192219ms 706.642619ms 711.419484ms 711.820574ms 712.034464ms 712.160889ms 712.309199ms 712.327601ms 713.551726ms 716.16851ms 716.651975ms 716.844876ms 717.586237ms 717.588244ms 717.826056ms 718.232567ms 718.32624ms 718.463712ms 719.690122ms 721.435098ms 723.547609ms 724.025573ms 725.630928ms 725.728233ms 726.904281ms 726.977444ms 728.226587ms 728.398627ms 729.715388ms 729.770269ms 730.105603ms 734.056344ms 734.641216ms 735.560947ms 735.589747ms 735.703287ms 735.980477ms 736.170534ms 736.175169ms 737.548851ms 739.660438ms 741.02886ms 741.969068ms 742.213963ms 742.334378ms 742.393097ms 743.459882ms 744.154826ms 747.34253ms 747.478429ms 747.489727ms 747.886714ms 747.940556ms 748.092091ms 748.30114ms 748.396972ms 748.628396ms 749.640612ms 749.997089ms 752.136035ms 753.288418ms 753.640806ms 754.124888ms 754.700435ms 756.863855ms 757.054638ms 757.331283ms 757.412757ms 759.436971ms 760.126553ms 760.130679ms 760.237712ms 760.251162ms 760.923469ms 762.560515ms 762.646465ms 763.934469ms 764.750276ms 764.820549ms 766.108943ms 766.150958ms 766.33645ms 766.463294ms 767.009922ms 769.135505ms 770.303516ms 
770.698374ms 770.810301ms 771.275334ms 771.880103ms 772.221184ms 772.331265ms 772.478222ms 773.462405ms 774.752627ms 776.476235ms 777.177628ms 777.349818ms 777.879108ms 778.478041ms 778.598961ms 780.995889ms 781.157446ms 781.870288ms 782.972889ms 783.042412ms 783.829748ms 783.90782ms 783.961625ms 784.568393ms 784.91354ms 786.603766ms 788.783481ms 789.579915ms 789.768235ms 790.024003ms 790.047686ms 791.770643ms 793.795976ms 794.937992ms 795.838016ms 796.145718ms 796.378146ms 796.404578ms 796.871429ms 797.021211ms 798.059799ms 801.036971ms 802.114028ms 802.60721ms 803.499456ms 804.932417ms 805.031886ms 805.925189ms 806.67275ms 807.402647ms 807.460536ms 808.724995ms 809.579058ms 810.868189ms 811.389325ms 812.794174ms 813.863101ms 815.05728ms 815.18019ms 817.855754ms 820.053559ms 821.403863ms 825.464203ms 827.736044ms 832.300822ms 834.430318ms 836.010474ms 837.767156ms 840.244726ms 840.68035ms 841.933642ms 846.720792ms 847.459187ms 858.929268ms 861.989543ms 872.484756ms 874.606577ms 886.261929ms 891.554385ms 892.138256ms 919.853767ms 921.48317ms] Apr 3 14:19:07.172: INFO: 50 %ile: 759.436971ms Apr 3 14:19:07.172: INFO: 90 %ile: 825.464203ms Apr 3 14:19:07.172: INFO: 99 %ile: 919.853767ms Apr 3 14:19:07.172: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:19:07.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-6282" for this suite. 
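The summary above sorts the 200 recorded endpoint-availability latencies and reports 50/90/99 percentiles. A minimal Python sketch of that kind of summary, assuming simple nearest-rank percentile semantics rather than the e2e framework's exact Go implementation:

```python
import math

def percentile(samples_ms, pct):
    """Nearest-rank percentile: smallest sample with at least pct% of samples <= it."""
    ordered = sorted(samples_ms)
    rank = max(1, math.ceil(pct / 100.0 * len(ordered)))
    return ordered[rank - 1]

# Hypothetical subset of latency samples, in milliseconds.
samples = [54.1, 107.1, 759.4, 825.5, 919.9, 921.5]
for p in (50, 90, 99):
    print(f"{p} %ile: {percentile(samples, p)}ms")
print(f"Total sample count: {len(samples)}")
```

For the full run, the test collects one sample per service (200 total) and the conformance check asserts the tail percentiles stay below a threshold, which is why the test is named "should not be very high".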
Apr 3 14:19:29.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:19:29.307: INFO: namespace svc-latency-6282 deletion completed in 22.088047913s • [SLOW TEST:35.534 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:19:29.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Apr 3 14:19:29.348: INFO: Creating deployment "test-recreate-deployment" Apr 3 14:19:29.367: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 Apr 3 14:19:29.388: INFO: deployment "test-recreate-deployment" doesn't have the required revision set Apr 3 14:19:31.439: INFO: Waiting deployment "test-recreate-deployment" to complete Apr 3 14:19:31.442: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721520369, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721520369, loc:(*time.Location)(0x7ea78c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63721520369, loc:(*time.Location)(0x7ea78c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63721520369, loc:(*time.Location)(0x7ea78c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} Apr 3 14:19:33.446: INFO: Triggering a new rollout for deployment "test-recreate-deployment" Apr 3 14:19:33.495: INFO: Updating deployment test-recreate-deployment Apr 3 14:19:33.495: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Apr 3 14:19:33.813: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-7630,SelfLink:/apis/apps/v1/namespaces/deployment-7630/deployments/test-recreate-deployment,UID:60753cb0-0e7c-423d-b548-b51c9eb9d6e7,ResourceVersion:3408647,Generation:2,CreationTimestamp:2020-04-03 14:19:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-04-03 14:19:33 +0000 UTC 2020-04-03 14:19:33 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-04-03 14:19:33 +0000 UTC 2020-04-03 14:19:29 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} Apr 3 14:19:33.843: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-7630,SelfLink:/apis/apps/v1/namespaces/deployment-7630/replicasets/test-recreate-deployment-5c8c9cc69d,UID:69b3836c-0623-4e26-8d63-02fa0e114651,ResourceVersion:3408646,Generation:1,CreationTimestamp:2020-04-03 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 60753cb0-0e7c-423d-b548-b51c9eb9d6e7 0xc0031938a7 0xc0031938a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 3 14:19:33.843: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": Apr 3 14:19:33.843: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-7630,SelfLink:/apis/apps/v1/namespaces/deployment-7630/replicasets/test-recreate-deployment-6df85df6b9,UID:da10afb5-e9da-46e6-b498-f4effa651e07,ResourceVersion:3408637,Generation:2,CreationTimestamp:2020-04-03 14:19:29 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 60753cb0-0e7c-423d-b548-b51c9eb9d6e7 0xc003193977 0xc003193978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Apr 3 14:19:33.848: INFO: Pod "test-recreate-deployment-5c8c9cc69d-pktzr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-pktzr,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-7630,SelfLink:/api/v1/namespaces/deployment-7630/pods/test-recreate-deployment-5c8c9cc69d-pktzr,UID:9cf40cdb-418c-4a8f-a5e4-e39140f6b174,ResourceVersion:3408648,Generation:0,CreationTimestamp:2020-04-03 14:19:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 69b3836c-0623-4e26-8d63-02fa0e114651 0xc0037d4257 0xc0037d4258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-5tv8j {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-5tv8j,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-5tv8j true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0037d42d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0037d42f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:19:33 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:19:33 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:19:33 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-04-03 14:19:33 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:19:33.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7630" for this suite. 
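The Recreate test above drives a Deployment whose strategy guarantees every old pod is scaled down before any new pod starts (the log shows the old ReplicaSet at `Replicas:*0` while the new one progresses). A minimal sketch of such a manifest, built as a plain dict — the label key mirrors the log's selector, but the function and its arguments are illustrative, not taken from the test source:

```python
# Minimal Deployment manifest using the Recreate strategy: the controller
# scales the old ReplicaSet to zero before the new ReplicaSet creates any
# pods, which is the behavior the e2e test verifies.
def recreate_deployment(name, image, replicas=1):
    labels = {"name": "sample-pod-3"}  # same selector label as in the log
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            # With type Recreate, a rollingUpdate block must NOT be set.
            "strategy": {"type": "Recreate"},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{"name": "nginx", "image": image}]},
            },
        },
    }

manifest = recreate_deployment("test-recreate-deployment",
                               "docker.io/library/nginx:1.14-alpine")
```

Applying an image change to such a Deployment produces exactly the sequence logged above: the revision-1 ReplicaSet drops to zero replicas before the revision-2 pod is created.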
Apr 3 14:19:40.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:19:40.080: INFO: namespace deployment-7630 deletion completed in 6.206083192s • [SLOW TEST:10.773 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:19:40.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 3 14:19:48.183: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:19:48.191: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:19:50.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:19:50.195: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:19:52.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:19:52.195: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:19:54.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:19:54.195: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:19:56.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:19:56.195: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:19:58.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:19:58.195: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:20:00.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:20:00.195: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:20:02.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:20:02.194: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:20:04.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:20:04.195: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:20:06.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:20:06.195: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:20:08.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:20:08.195: INFO: Pod pod-with-poststart-exec-hook still 
exists Apr 3 14:20:10.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:20:10.194: INFO: Pod pod-with-poststart-exec-hook still exists Apr 3 14:20:12.191: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear Apr 3 14:20:12.195: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:20:12.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3981" for this suite. Apr 3 14:20:34.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:20:34.292: INFO: namespace container-lifecycle-hook-3981 deletion completed in 22.094197441s • [SLOW TEST:54.212 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:20:34.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment 
when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0403 14:20:35.439249 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Apr 3 14:20:35.439: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:20:35.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7266" for this suite. 
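The garbage-collector test above deletes a Deployment and waits for its ReplicaSet and pods to be collected. Cascading deletion hinges on the `ownerReferences` back-pointers that controllers stamp onto the objects they manage (visible in the ReplicaSet dumps earlier in this log). A sketch of that wiring and of the orphan check, in pure Python with an illustrative UID:

```python
# Sketch of the ownerReference a Deployment controller sets on a
# ReplicaSet; the garbage collector walks these back-pointers to decide
# what to delete once the owner disappears. The UID value is made up.
def owner_ref(owner_kind, owner_name, owner_uid):
    return {
        "apiVersion": "apps/v1",
        "kind": owner_kind,
        "name": owner_name,
        "uid": owner_uid,
        "controller": True,          # at most one controller=true ref per object
        "blockOwnerDeletion": True,  # foreground deletion waits on this child
    }

def is_gc_eligible(obj, live_uids):
    """An owned object becomes collectable once none of its owners exist."""
    refs = obj.get("metadata", {}).get("ownerReferences", [])
    return bool(refs) and all(r["uid"] not in live_uids for r in refs)

rs = {"metadata": {"ownerReferences": [
    owner_ref("Deployment", "test-deployment", "uid-1")]}}
```

With "not orphaning" (the default propagation policy), the collector deletes `rs` as soon as `uid-1` is no longer among the live objects — which is why the test polls until it sees zero ReplicaSets and pods.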
Apr 3 14:20:41.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:20:41.561: INFO: namespace gc-7266 deletion completed in 6.1185474s • [SLOW TEST:7.269 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:20:41.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container Apr 3 14:20:45.641: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-f634443d-ece3-41f5-86df-24a5e4b83c05 -c busybox-main-container --namespace=emptydir-8549 -- cat /usr/share/volumeshare/shareddata.txt' Apr 3 14:20:45.843: INFO: stderr: "I0403 14:20:45.775859 2227 log.go:172] (0xc000116dc0) (0xc00060eaa0) Create stream\nI0403 14:20:45.775925 2227 log.go:172] (0xc000116dc0) (0xc00060eaa0) Stream added, broadcasting: 1\nI0403 14:20:45.777967 2227 log.go:172] (0xc000116dc0) Reply frame received for 
1\nI0403 14:20:45.778005 2227 log.go:172] (0xc000116dc0) (0xc00055c000) Create stream\nI0403 14:20:45.778025 2227 log.go:172] (0xc000116dc0) (0xc00055c000) Stream added, broadcasting: 3\nI0403 14:20:45.778637 2227 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0403 14:20:45.778667 2227 log.go:172] (0xc000116dc0) (0xc00060eb40) Create stream\nI0403 14:20:45.778676 2227 log.go:172] (0xc000116dc0) (0xc00060eb40) Stream added, broadcasting: 5\nI0403 14:20:45.779277 2227 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0403 14:20:45.837864 2227 log.go:172] (0xc000116dc0) Data frame received for 3\nI0403 14:20:45.837908 2227 log.go:172] (0xc000116dc0) Data frame received for 5\nI0403 14:20:45.837946 2227 log.go:172] (0xc00060eb40) (5) Data frame handling\nI0403 14:20:45.837976 2227 log.go:172] (0xc00055c000) (3) Data frame handling\nI0403 14:20:45.837990 2227 log.go:172] (0xc00055c000) (3) Data frame sent\nI0403 14:20:45.838001 2227 log.go:172] (0xc000116dc0) Data frame received for 3\nI0403 14:20:45.838016 2227 log.go:172] (0xc00055c000) (3) Data frame handling\nI0403 14:20:45.839449 2227 log.go:172] (0xc000116dc0) Data frame received for 1\nI0403 14:20:45.839502 2227 log.go:172] (0xc00060eaa0) (1) Data frame handling\nI0403 14:20:45.839542 2227 log.go:172] (0xc00060eaa0) (1) Data frame sent\nI0403 14:20:45.839573 2227 log.go:172] (0xc000116dc0) (0xc00060eaa0) Stream removed, broadcasting: 1\nI0403 14:20:45.839596 2227 log.go:172] (0xc000116dc0) Go away received\nI0403 14:20:45.839826 2227 log.go:172] (0xc000116dc0) (0xc00060eaa0) Stream removed, broadcasting: 1\nI0403 14:20:45.839839 2227 log.go:172] (0xc000116dc0) (0xc00055c000) Stream removed, broadcasting: 3\nI0403 14:20:45.839844 2227 log.go:172] (0xc000116dc0) (0xc00060eb40) Stream removed, broadcasting: 5\n" Apr 3 14:20:45.843: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:20:45.843: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8549" for this suite. Apr 3 14:20:51.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:20:51.946: INFO: namespace emptydir-8549 deletion completed in 6.090876143s • [SLOW TEST:10.385 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:20:51.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set Apr 3 14:20:56.036: 
INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:20:56.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-4865" for this suite. Apr 3 14:21:02.140: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:21:02.216: INFO: namespace container-runtime-4865 deletion completed in 6.090321845s • [SLOW TEST:10.269 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:21:02.216: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call 
prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-8919 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-8919 STEP: Deleting pre-stop pod Apr 3 14:21:15.331: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:21:15.337: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-8919" for this suite. 
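The PreStop test above deletes the tester pod and then checks that the server recorded a `"prestop": 1` contact, proving the hook ran before the container died. The mechanism is a `lifecycle.preStop` handler on the container spec; the earlier postStart test hangs its hook off the same field. A sketch with illustrative names and command (the real test uses its own images and endpoints):

```python
# Pod spec fragment with a preStop exec hook: the kubelet runs this
# command before sending SIGTERM to the container, which is how the
# tester pod manages to call back to the server during deletion.
# A postStart hook would go under "lifecycle" -> "postStart" the same way.
def pod_with_prestop(name, image, prestop_cmd):
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "containers": [{
                "name": name,
                "image": image,
                "lifecycle": {"preStop": {"exec": {"command": prestop_cmd}}},
            }],
            # leave the hook time to finish before a hard kill
            "terminationGracePeriodSeconds": 30,
        },
    }

tester = pod_with_prestop(
    "tester", "busybox",
    ["wget", "-qO-", "http://server:8080/prestop"])  # hypothetical callback
```

Hook execution counts against the grace period, so a slow preStop command plus a short `terminationGracePeriodSeconds` can cut the hook off mid-flight.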
Apr 3 14:21:53.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:21:53.493: INFO: namespace prestop-8919 deletion completed in 38.103736961s • [SLOW TEST:51.277 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:21:53.493: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:21:59.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6205" for this suite. 
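The Watchers test above opens one watch per resourceVersion of a produced event stream and asserts every watch sees the remaining events in the same global order. Stripped of the API machinery, the invariant being checked can be sketched in pure Python, modeling resourceVersions as increasing integers:

```python
# The invariant behind the concurrent-watch test: a watch started at
# resourceVersion rv must observe exactly the suffix of the event stream
# after rv, in the stream's order, regardless of where it starts.
def watch_from(events, start_rv):
    """Replay the events strictly after start_rv, preserving order."""
    return [e for e in events if e > start_rv]

def consistent(events):
    # For each starting point, the replay must equal the literal suffix;
    # if that holds everywhere, all concurrent watches agree on ordering.
    return all(watch_from(events, rv) == events[i + 1:]
               for i, rv in enumerate(events))

stream = [1, 2, 3, 4, 5]  # illustrative resourceVersions
```

A stream delivered out of order, e.g. `[1, 3, 2]`, fails the check: a watch started at 3 would miss event 2 even though the literal suffix still contains it.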
Apr 3 14:22:05.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:22:05.233: INFO: namespace watch-6205 deletion completed in 6.183478318s • [SLOW TEST:11.740 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:22:05.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Apr 3 14:22:05.309: INFO: Waiting up to 5m0s for pod "pod-d7078957-5f7c-4bc4-9338-b4d1863a73c8" in namespace "emptydir-1133" to be "success or failure" Apr 3 14:22:05.313: INFO: Pod "pod-d7078957-5f7c-4bc4-9338-b4d1863a73c8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.812381ms Apr 3 14:22:07.316: INFO: Pod "pod-d7078957-5f7c-4bc4-9338-b4d1863a73c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007139166s Apr 3 14:22:09.321: INFO: Pod "pod-d7078957-5f7c-4bc4-9338-b4d1863a73c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012025838s
STEP: Saw pod success
Apr 3 14:22:09.321: INFO: Pod "pod-d7078957-5f7c-4bc4-9338-b4d1863a73c8" satisfied condition "success or failure"
Apr 3 14:22:09.325: INFO: Trying to get logs from node iruya-worker pod pod-d7078957-5f7c-4bc4-9338-b4d1863a73c8 container test-container:
STEP: delete the pod
Apr 3 14:22:09.344: INFO: Waiting for pod pod-d7078957-5f7c-4bc4-9338-b4d1863a73c8 to disappear
Apr 3 14:22:09.348: INFO: Pod pod-d7078957-5f7c-4bc4-9338-b4d1863a73c8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:22:09.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1133" for this suite.
Apr 3 14:22:15.358: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:22:15.430: INFO: namespace emptydir-1133 deletion completed in 6.078815452s
• [SLOW TEST:10.196 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:22:15.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Apr 3 14:22:15.503: INFO: Waiting up to 5m0s for pod "pod-2b21c854-86fb-4840-8dd6-2fc871d8f9ad" in namespace "emptydir-4789" to be "success or failure"
Apr 3 14:22:15.511: INFO: Pod "pod-2b21c854-86fb-4840-8dd6-2fc871d8f9ad": Phase="Pending", Reason="", readiness=false. Elapsed: 7.703865ms
Apr 3 14:22:17.515: INFO: Pod "pod-2b21c854-86fb-4840-8dd6-2fc871d8f9ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011948931s
Apr 3 14:22:19.519: INFO: Pod "pod-2b21c854-86fb-4840-8dd6-2fc871d8f9ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.015744733s
STEP: Saw pod success
Apr 3 14:22:19.519: INFO: Pod "pod-2b21c854-86fb-4840-8dd6-2fc871d8f9ad" satisfied condition "success or failure"
Apr 3 14:22:19.522: INFO: Trying to get logs from node iruya-worker2 pod pod-2b21c854-86fb-4840-8dd6-2fc871d8f9ad container test-container:
STEP: delete the pod
Apr 3 14:22:19.555: INFO: Waiting for pod pod-2b21c854-86fb-4840-8dd6-2fc871d8f9ad to disappear
Apr 3 14:22:19.565: INFO: Pod pod-2b21c854-86fb-4840-8dd6-2fc871d8f9ad no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:22:19.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4789" for this suite.
Apr 3 14:22:25.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:22:25.654: INFO: namespace emptydir-4789 deletion completed in 6.085291262s • [SLOW TEST:10.223 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:22:25.654: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4067/secret-test-f837648e-8953-41b2-8995-9d1cff6b81fb STEP: Creating a pod to test consume secrets Apr 3 14:22:25.732: INFO: Waiting up to 5m0s for pod "pod-configmaps-12ac7ada-8565-4224-b36a-bc9ad77ad59e" in namespace "secrets-4067" to be "success or failure" Apr 3 14:22:25.736: INFO: Pod "pod-configmaps-12ac7ada-8565-4224-b36a-bc9ad77ad59e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.298733ms Apr 3 14:22:27.740: INFO: Pod "pod-configmaps-12ac7ada-8565-4224-b36a-bc9ad77ad59e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008541825s Apr 3 14:22:29.744: INFO: Pod "pod-configmaps-12ac7ada-8565-4224-b36a-bc9ad77ad59e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012526608s STEP: Saw pod success Apr 3 14:22:29.744: INFO: Pod "pod-configmaps-12ac7ada-8565-4224-b36a-bc9ad77ad59e" satisfied condition "success or failure" Apr 3 14:22:29.747: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-12ac7ada-8565-4224-b36a-bc9ad77ad59e container env-test: STEP: delete the pod Apr 3 14:22:29.780: INFO: Waiting for pod pod-configmaps-12ac7ada-8565-4224-b36a-bc9ad77ad59e to disappear Apr 3 14:22:29.796: INFO: Pod pod-configmaps-12ac7ada-8565-4224-b36a-bc9ad77ad59e no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:22:29.796: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4067" for this suite. Apr 3 14:22:35.826: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:22:35.909: INFO: namespace secrets-4067 deletion completed in 6.109417818s • [SLOW TEST:10.255 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 
14:22:35.910: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3555 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-3555 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-3555 Apr 3 14:22:36.015: INFO: Found 0 stateful pods, waiting for 1 Apr 3 14:22:46.019: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Apr 3 14:22:46.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 3 14:22:46.290: INFO: stderr: "I0403 14:22:46.165536 2248 log.go:172] (0xc000a7a630) (0xc00068eb40) Create stream\nI0403 14:22:46.165586 2248 log.go:172] (0xc000a7a630) (0xc00068eb40) Stream added, broadcasting: 1\nI0403 14:22:46.167343 2248 log.go:172] (0xc000a7a630) Reply frame received for 1\nI0403 14:22:46.167375 2248 log.go:172] (0xc000a7a630) (0xc00068ebe0) Create stream\nI0403 14:22:46.167383 2248 log.go:172] (0xc000a7a630) (0xc00068ebe0) Stream added, broadcasting: 3\nI0403 14:22:46.168607 2248 log.go:172] (0xc000a7a630) Reply frame received for 3\nI0403 14:22:46.168835 2248 log.go:172] (0xc000a7a630) (0xc000a54000) Create stream\nI0403 
14:22:46.168954 2248 log.go:172] (0xc000a7a630) (0xc000a54000) Stream added, broadcasting: 5\nI0403 14:22:46.170519 2248 log.go:172] (0xc000a7a630) Reply frame received for 5\nI0403 14:22:46.259535 2248 log.go:172] (0xc000a7a630) Data frame received for 5\nI0403 14:22:46.259572 2248 log.go:172] (0xc000a54000) (5) Data frame handling\nI0403 14:22:46.259594 2248 log.go:172] (0xc000a54000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 14:22:46.283595 2248 log.go:172] (0xc000a7a630) Data frame received for 3\nI0403 14:22:46.283642 2248 log.go:172] (0xc00068ebe0) (3) Data frame handling\nI0403 14:22:46.283672 2248 log.go:172] (0xc00068ebe0) (3) Data frame sent\nI0403 14:22:46.283738 2248 log.go:172] (0xc000a7a630) Data frame received for 3\nI0403 14:22:46.283758 2248 log.go:172] (0xc00068ebe0) (3) Data frame handling\nI0403 14:22:46.283817 2248 log.go:172] (0xc000a7a630) Data frame received for 5\nI0403 14:22:46.283846 2248 log.go:172] (0xc000a54000) (5) Data frame handling\nI0403 14:22:46.285567 2248 log.go:172] (0xc000a7a630) Data frame received for 1\nI0403 14:22:46.285586 2248 log.go:172] (0xc00068eb40) (1) Data frame handling\nI0403 14:22:46.285599 2248 log.go:172] (0xc00068eb40) (1) Data frame sent\nI0403 14:22:46.285616 2248 log.go:172] (0xc000a7a630) (0xc00068eb40) Stream removed, broadcasting: 1\nI0403 14:22:46.285631 2248 log.go:172] (0xc000a7a630) Go away received\nI0403 14:22:46.286111 2248 log.go:172] (0xc000a7a630) (0xc00068eb40) Stream removed, broadcasting: 1\nI0403 14:22:46.286133 2248 log.go:172] (0xc000a7a630) (0xc00068ebe0) Stream removed, broadcasting: 3\nI0403 14:22:46.286145 2248 log.go:172] (0xc000a7a630) (0xc000a54000) Stream removed, broadcasting: 5\n" Apr 3 14:22:46.290: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 3 14:22:46.290: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 3 14:22:46.294: 
INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Apr 3 14:22:56.299: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 3 14:22:56.299: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 14:22:56.320: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 14:22:56.320: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:46 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC }] Apr 3 14:22:56.320: INFO: Apr 3 14:22:56.320: INFO: StatefulSet ss has not reached scale 3, at 1 Apr 3 14:22:57.342: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993924072s Apr 3 14:22:58.432: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972250628s Apr 3 14:22:59.435: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.882124878s Apr 3 14:23:00.456: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.878582536s Apr 3 14:23:01.460: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.858352492s Apr 3 14:23:02.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.853710556s Apr 3 14:23:03.470: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.848816309s Apr 3 14:23:04.475: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.843777781s Apr 3 14:23:05.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 839.245842ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3555 Apr 3 14:23:06.484: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:23:06.719: INFO: stderr: "I0403 14:23:06.619834 2268 log.go:172] (0xc0006fcc60) (0xc00067abe0) Create stream\nI0403 14:23:06.619887 2268 log.go:172] (0xc0006fcc60) (0xc00067abe0) Stream added, broadcasting: 1\nI0403 14:23:06.623058 2268 log.go:172] (0xc0006fcc60) Reply frame received for 1\nI0403 14:23:06.623105 2268 log.go:172] (0xc0006fcc60) (0xc00067a320) Create stream\nI0403 14:23:06.623123 2268 log.go:172] (0xc0006fcc60) (0xc00067a320) Stream added, broadcasting: 3\nI0403 14:23:06.624171 2268 log.go:172] (0xc0006fcc60) Reply frame received for 3\nI0403 14:23:06.624198 2268 log.go:172] (0xc0006fcc60) (0xc00067a3c0) Create stream\nI0403 14:23:06.624207 2268 log.go:172] (0xc0006fcc60) (0xc00067a3c0) Stream added, broadcasting: 5\nI0403 14:23:06.625033 2268 log.go:172] (0xc0006fcc60) Reply frame received for 5\nI0403 14:23:06.709466 2268 log.go:172] (0xc0006fcc60) Data frame received for 5\nI0403 14:23:06.709487 2268 log.go:172] (0xc00067a3c0) (5) Data frame handling\nI0403 14:23:06.709498 2268 log.go:172] (0xc00067a3c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0403 14:23:06.715104 2268 log.go:172] (0xc0006fcc60) Data frame received for 5\nI0403 14:23:06.715124 2268 log.go:172] (0xc00067a3c0) (5) Data frame handling\nI0403 14:23:06.715160 2268 log.go:172] (0xc0006fcc60) Data frame received for 3\nI0403 14:23:06.715170 2268 log.go:172] (0xc00067a320) (3) Data frame handling\nI0403 14:23:06.715183 2268 log.go:172] (0xc00067a320) (3) Data frame sent\nI0403 14:23:06.715200 2268 log.go:172] (0xc0006fcc60) Data frame received for 3\nI0403 14:23:06.715210 2268 log.go:172] (0xc00067a320) (3) Data frame handling\nI0403 14:23:06.716225 2268 log.go:172] (0xc0006fcc60) Data frame received for 1\nI0403 14:23:06.716237 2268 log.go:172] (0xc00067abe0) (1) Data frame handling\nI0403 14:23:06.716242 2268 
log.go:172] (0xc00067abe0) (1) Data frame sent\nI0403 14:23:06.716251 2268 log.go:172] (0xc0006fcc60) (0xc00067abe0) Stream removed, broadcasting: 1\nI0403 14:23:06.716258 2268 log.go:172] (0xc0006fcc60) Go away received\nI0403 14:23:06.716486 2268 log.go:172] (0xc0006fcc60) (0xc00067abe0) Stream removed, broadcasting: 1\nI0403 14:23:06.716506 2268 log.go:172] (0xc0006fcc60) (0xc00067a320) Stream removed, broadcasting: 3\nI0403 14:23:06.716513 2268 log.go:172] (0xc0006fcc60) (0xc00067a3c0) Stream removed, broadcasting: 5\n" Apr 3 14:23:06.720: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 3 14:23:06.720: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 3 14:23:06.720: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:23:06.916: INFO: stderr: "I0403 14:23:06.853284 2289 log.go:172] (0xc00063a8f0) (0xc00065b040) Create stream\nI0403 14:23:06.853338 2289 log.go:172] (0xc00063a8f0) (0xc00065b040) Stream added, broadcasting: 1\nI0403 14:23:06.856813 2289 log.go:172] (0xc00063a8f0) Reply frame received for 1\nI0403 14:23:06.856850 2289 log.go:172] (0xc00063a8f0) (0xc00065a3c0) Create stream\nI0403 14:23:06.856868 2289 log.go:172] (0xc00063a8f0) (0xc00065a3c0) Stream added, broadcasting: 3\nI0403 14:23:06.857701 2289 log.go:172] (0xc00063a8f0) Reply frame received for 3\nI0403 14:23:06.857726 2289 log.go:172] (0xc00063a8f0) (0xc000691c20) Create stream\nI0403 14:23:06.857734 2289 log.go:172] (0xc00063a8f0) (0xc000691c20) Stream added, broadcasting: 5\nI0403 14:23:06.858439 2289 log.go:172] (0xc00063a8f0) Reply frame received for 5\nI0403 14:23:06.910474 2289 log.go:172] (0xc00063a8f0) Data frame received for 3\nI0403 14:23:06.910516 2289 log.go:172] (0xc00065a3c0) (3) Data frame handling\nI0403 14:23:06.910531 2289 
log.go:172] (0xc00065a3c0) (3) Data frame sent\nI0403 14:23:06.910542 2289 log.go:172] (0xc00063a8f0) Data frame received for 3\nI0403 14:23:06.910549 2289 log.go:172] (0xc00065a3c0) (3) Data frame handling\nI0403 14:23:06.910589 2289 log.go:172] (0xc00063a8f0) Data frame received for 5\nI0403 14:23:06.910604 2289 log.go:172] (0xc000691c20) (5) Data frame handling\nI0403 14:23:06.910625 2289 log.go:172] (0xc000691c20) (5) Data frame sent\nI0403 14:23:06.910636 2289 log.go:172] (0xc00063a8f0) Data frame received for 5\nI0403 14:23:06.910646 2289 log.go:172] (0xc000691c20) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0403 14:23:06.910666 2289 log.go:172] (0xc000691c20) (5) Data frame sent\nI0403 14:23:06.910679 2289 log.go:172] (0xc00063a8f0) Data frame received for 5\nI0403 14:23:06.910695 2289 log.go:172] (0xc000691c20) (5) Data frame handling\nI0403 14:23:06.912301 2289 log.go:172] (0xc00063a8f0) Data frame received for 1\nI0403 14:23:06.912322 2289 log.go:172] (0xc00065b040) (1) Data frame handling\nI0403 14:23:06.912347 2289 log.go:172] (0xc00065b040) (1) Data frame sent\nI0403 14:23:06.912370 2289 log.go:172] (0xc00063a8f0) (0xc00065b040) Stream removed, broadcasting: 1\nI0403 14:23:06.912396 2289 log.go:172] (0xc00063a8f0) Go away received\nI0403 14:23:06.912643 2289 log.go:172] (0xc00063a8f0) (0xc00065b040) Stream removed, broadcasting: 1\nI0403 14:23:06.912660 2289 log.go:172] (0xc00063a8f0) (0xc00065a3c0) Stream removed, broadcasting: 3\nI0403 14:23:06.912668 2289 log.go:172] (0xc00063a8f0) (0xc000691c20) Stream removed, broadcasting: 5\n" Apr 3 14:23:06.917: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 3 14:23:06.917: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 3 14:23:06.917: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:23:07.120: INFO: stderr: "I0403 14:23:07.054854 2309 log.go:172] (0xc0009c4370) (0xc0007f2640) Create stream\nI0403 14:23:07.054916 2309 log.go:172] (0xc0009c4370) (0xc0007f2640) Stream added, broadcasting: 1\nI0403 14:23:07.057343 2309 log.go:172] (0xc0009c4370) Reply frame received for 1\nI0403 14:23:07.057378 2309 log.go:172] (0xc0009c4370) (0xc00069a280) Create stream\nI0403 14:23:07.057388 2309 log.go:172] (0xc0009c4370) (0xc00069a280) Stream added, broadcasting: 3\nI0403 14:23:07.058388 2309 log.go:172] (0xc0009c4370) Reply frame received for 3\nI0403 14:23:07.058418 2309 log.go:172] (0xc0009c4370) (0xc0007f26e0) Create stream\nI0403 14:23:07.058436 2309 log.go:172] (0xc0009c4370) (0xc0007f26e0) Stream added, broadcasting: 5\nI0403 14:23:07.059239 2309 log.go:172] (0xc0009c4370) Reply frame received for 5\nI0403 14:23:07.113488 2309 log.go:172] (0xc0009c4370) Data frame received for 5\nI0403 14:23:07.113531 2309 log.go:172] (0xc0007f26e0) (5) Data frame handling\nI0403 14:23:07.113562 2309 log.go:172] (0xc0007f26e0) (5) Data frame sent\nI0403 14:23:07.113584 2309 log.go:172] (0xc0009c4370) Data frame received for 5\nI0403 14:23:07.113595 2309 log.go:172] (0xc0007f26e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0403 14:23:07.113632 2309 log.go:172] (0xc0009c4370) Data frame received for 3\nI0403 14:23:07.113678 2309 log.go:172] (0xc00069a280) (3) Data frame handling\nI0403 14:23:07.113706 2309 log.go:172] (0xc00069a280) (3) Data frame sent\nI0403 14:23:07.113724 2309 log.go:172] (0xc0009c4370) Data frame received for 3\nI0403 14:23:07.113736 2309 log.go:172] (0xc00069a280) (3) Data frame handling\nI0403 14:23:07.115294 2309 log.go:172] (0xc0009c4370) Data frame received for 1\nI0403 14:23:07.115334 2309 
log.go:172] (0xc0007f2640) (1) Data frame handling\nI0403 14:23:07.115375 2309 log.go:172] (0xc0007f2640) (1) Data frame sent\nI0403 14:23:07.115544 2309 log.go:172] (0xc0009c4370) (0xc0007f2640) Stream removed, broadcasting: 1\nI0403 14:23:07.115654 2309 log.go:172] (0xc0009c4370) Go away received\nI0403 14:23:07.115937 2309 log.go:172] (0xc0009c4370) (0xc0007f2640) Stream removed, broadcasting: 1\nI0403 14:23:07.115955 2309 log.go:172] (0xc0009c4370) (0xc00069a280) Stream removed, broadcasting: 3\nI0403 14:23:07.115966 2309 log.go:172] (0xc0009c4370) (0xc0007f26e0) Stream removed, broadcasting: 5\n" Apr 3 14:23:07.120: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Apr 3 14:23:07.120: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Apr 3 14:23:07.124: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Apr 3 14:23:17.132: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Apr 3 14:23:17.132: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Apr 3 14:23:17.132: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Apr 3 14:23:17.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 3 14:23:17.357: INFO: stderr: "I0403 14:23:17.254535 2329 log.go:172] (0xc000432420) (0xc0005c6820) Create stream\nI0403 14:23:17.254658 2329 log.go:172] (0xc000432420) (0xc0005c6820) Stream added, broadcasting: 1\nI0403 14:23:17.257088 2329 log.go:172] (0xc000432420) Reply frame received for 1\nI0403 14:23:17.257255 2329 log.go:172] (0xc000432420) (0xc0008ca000) Create stream\nI0403 14:23:17.257274 2329 log.go:172] (0xc000432420) 
(0xc0008ca000) Stream added, broadcasting: 3\nI0403 14:23:17.258296 2329 log.go:172] (0xc000432420) Reply frame received for 3\nI0403 14:23:17.258333 2329 log.go:172] (0xc000432420) (0xc0005c68c0) Create stream\nI0403 14:23:17.258351 2329 log.go:172] (0xc000432420) (0xc0005c68c0) Stream added, broadcasting: 5\nI0403 14:23:17.259243 2329 log.go:172] (0xc000432420) Reply frame received for 5\nI0403 14:23:17.350330 2329 log.go:172] (0xc000432420) Data frame received for 5\nI0403 14:23:17.350353 2329 log.go:172] (0xc0005c68c0) (5) Data frame handling\nI0403 14:23:17.350362 2329 log.go:172] (0xc0005c68c0) (5) Data frame sent\nI0403 14:23:17.350368 2329 log.go:172] (0xc000432420) Data frame received for 5\nI0403 14:23:17.350373 2329 log.go:172] (0xc0005c68c0) (5) Data frame handling\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 14:23:17.350492 2329 log.go:172] (0xc000432420) Data frame received for 3\nI0403 14:23:17.350536 2329 log.go:172] (0xc0008ca000) (3) Data frame handling\nI0403 14:23:17.350574 2329 log.go:172] (0xc0008ca000) (3) Data frame sent\nI0403 14:23:17.350598 2329 log.go:172] (0xc000432420) Data frame received for 3\nI0403 14:23:17.350615 2329 log.go:172] (0xc0008ca000) (3) Data frame handling\nI0403 14:23:17.352039 2329 log.go:172] (0xc000432420) Data frame received for 1\nI0403 14:23:17.352059 2329 log.go:172] (0xc0005c6820) (1) Data frame handling\nI0403 14:23:17.352075 2329 log.go:172] (0xc0005c6820) (1) Data frame sent\nI0403 14:23:17.352083 2329 log.go:172] (0xc000432420) (0xc0005c6820) Stream removed, broadcasting: 1\nI0403 14:23:17.352132 2329 log.go:172] (0xc000432420) Go away received\nI0403 14:23:17.352291 2329 log.go:172] (0xc000432420) (0xc0005c6820) Stream removed, broadcasting: 1\nI0403 14:23:17.352302 2329 log.go:172] (0xc000432420) (0xc0008ca000) Stream removed, broadcasting: 3\nI0403 14:23:17.352307 2329 log.go:172] (0xc000432420) (0xc0005c68c0) Stream removed, broadcasting: 5\n" Apr 3 14:23:17.357: INFO: stdout: 
"'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 3 14:23:17.357: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 3 14:23:17.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 3 14:23:17.590: INFO: stderr: "I0403 14:23:17.472364 2349 log.go:172] (0xc000836160) (0xc0002d8640) Create stream\nI0403 14:23:17.472414 2349 log.go:172] (0xc000836160) (0xc0002d8640) Stream added, broadcasting: 1\nI0403 14:23:17.474562 2349 log.go:172] (0xc000836160) Reply frame received for 1\nI0403 14:23:17.474604 2349 log.go:172] (0xc000836160) (0xc00018c320) Create stream\nI0403 14:23:17.474616 2349 log.go:172] (0xc000836160) (0xc00018c320) Stream added, broadcasting: 3\nI0403 14:23:17.475474 2349 log.go:172] (0xc000836160) Reply frame received for 3\nI0403 14:23:17.475511 2349 log.go:172] (0xc000836160) (0xc0002d86e0) Create stream\nI0403 14:23:17.475523 2349 log.go:172] (0xc000836160) (0xc0002d86e0) Stream added, broadcasting: 5\nI0403 14:23:17.476459 2349 log.go:172] (0xc000836160) Reply frame received for 5\nI0403 14:23:17.549011 2349 log.go:172] (0xc000836160) Data frame received for 5\nI0403 14:23:17.549030 2349 log.go:172] (0xc0002d86e0) (5) Data frame handling\nI0403 14:23:17.549038 2349 log.go:172] (0xc0002d86e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 14:23:17.582692 2349 log.go:172] (0xc000836160) Data frame received for 3\nI0403 14:23:17.582727 2349 log.go:172] (0xc00018c320) (3) Data frame handling\nI0403 14:23:17.582770 2349 log.go:172] (0xc00018c320) (3) Data frame sent\nI0403 14:23:17.583170 2349 log.go:172] (0xc000836160) Data frame received for 3\nI0403 14:23:17.583292 2349 log.go:172] (0xc00018c320) (3) Data frame handling\nI0403 14:23:17.583332 2349 log.go:172] (0xc000836160) Data frame received 
for 5\nI0403 14:23:17.583377 2349 log.go:172] (0xc0002d86e0) (5) Data frame handling\nI0403 14:23:17.585073 2349 log.go:172] (0xc000836160) Data frame received for 1\nI0403 14:23:17.585095 2349 log.go:172] (0xc0002d8640) (1) Data frame handling\nI0403 14:23:17.585265 2349 log.go:172] (0xc0002d8640) (1) Data frame sent\nI0403 14:23:17.585426 2349 log.go:172] (0xc000836160) (0xc0002d8640) Stream removed, broadcasting: 1\nI0403 14:23:17.585792 2349 log.go:172] (0xc000836160) Go away received\nI0403 14:23:17.585884 2349 log.go:172] (0xc000836160) (0xc0002d8640) Stream removed, broadcasting: 1\nI0403 14:23:17.585916 2349 log.go:172] (0xc000836160) (0xc00018c320) Stream removed, broadcasting: 3\nI0403 14:23:17.585937 2349 log.go:172] (0xc000836160) (0xc0002d86e0) Stream removed, broadcasting: 5\n" Apr 3 14:23:17.591: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 3 14:23:17.591: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 3 14:23:17.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Apr 3 14:23:17.819: INFO: stderr: "I0403 14:23:17.718188 2367 log.go:172] (0xc000a48370) (0xc000998640) Create stream\nI0403 14:23:17.718236 2367 log.go:172] (0xc000a48370) (0xc000998640) Stream added, broadcasting: 1\nI0403 14:23:17.720557 2367 log.go:172] (0xc000a48370) Reply frame received for 1\nI0403 14:23:17.720599 2367 log.go:172] (0xc000a48370) (0xc000934000) Create stream\nI0403 14:23:17.720610 2367 log.go:172] (0xc000a48370) (0xc000934000) Stream added, broadcasting: 3\nI0403 14:23:17.721750 2367 log.go:172] (0xc000a48370) Reply frame received for 3\nI0403 14:23:17.721814 2367 log.go:172] (0xc000a48370) (0xc0009986e0) Create stream\nI0403 14:23:17.721825 2367 log.go:172] (0xc000a48370) (0xc0009986e0) Stream added, broadcasting: 
5\nI0403 14:23:17.722856 2367 log.go:172] (0xc000a48370) Reply frame received for 5\nI0403 14:23:17.780847 2367 log.go:172] (0xc000a48370) Data frame received for 5\nI0403 14:23:17.780875 2367 log.go:172] (0xc0009986e0) (5) Data frame handling\nI0403 14:23:17.780896 2367 log.go:172] (0xc0009986e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0403 14:23:17.810786 2367 log.go:172] (0xc000a48370) Data frame received for 3\nI0403 14:23:17.810830 2367 log.go:172] (0xc000934000) (3) Data frame handling\nI0403 14:23:17.810857 2367 log.go:172] (0xc000934000) (3) Data frame sent\nI0403 14:23:17.810869 2367 log.go:172] (0xc000a48370) Data frame received for 3\nI0403 14:23:17.810877 2367 log.go:172] (0xc000934000) (3) Data frame handling\nI0403 14:23:17.811032 2367 log.go:172] (0xc000a48370) Data frame received for 5\nI0403 14:23:17.811059 2367 log.go:172] (0xc0009986e0) (5) Data frame handling\nI0403 14:23:17.812925 2367 log.go:172] (0xc000a48370) Data frame received for 1\nI0403 14:23:17.812955 2367 log.go:172] (0xc000998640) (1) Data frame handling\nI0403 14:23:17.813002 2367 log.go:172] (0xc000998640) (1) Data frame sent\nI0403 14:23:17.813047 2367 log.go:172] (0xc000a48370) (0xc000998640) Stream removed, broadcasting: 1\nI0403 14:23:17.813075 2367 log.go:172] (0xc000a48370) Go away received\nI0403 14:23:17.813671 2367 log.go:172] (0xc000a48370) (0xc000998640) Stream removed, broadcasting: 1\nI0403 14:23:17.813706 2367 log.go:172] (0xc000a48370) (0xc000934000) Stream removed, broadcasting: 3\nI0403 14:23:17.813719 2367 log.go:172] (0xc000a48370) (0xc0009986e0) Stream removed, broadcasting: 5\n" Apr 3 14:23:17.819: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Apr 3 14:23:17.819: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Apr 3 14:23:17.819: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 14:23:17.823: INFO: 
Waiting for stateful set status.readyReplicas to become 0, currently 3 Apr 3 14:23:27.831: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Apr 3 14:23:27.831: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Apr 3 14:23:27.831: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Apr 3 14:23:27.842: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 14:23:27.842: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC }] Apr 3 14:23:27.842: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC }] Apr 3 14:23:27.842: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC }] Apr 3 14:23:27.842: INFO: Apr 3 14:23:27.842: INFO: StatefulSet 
ss has not reached scale 0, at 3 Apr 3 14:23:28.855: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 14:23:28.855: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC }] Apr 3 14:23:28.855: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC }] Apr 3 14:23:28.855: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC }] Apr 3 14:23:28.855: INFO: Apr 3 14:23:28.855: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 3 14:23:29.862: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 14:23:29.862: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC }] Apr 3 14:23:29.863: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC }] Apr 3 14:23:29.863: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:56 +0000 UTC }] Apr 3 14:23:29.863: INFO: Apr 3 14:23:29.863: INFO: StatefulSet ss has not reached scale 0, at 3 Apr 3 14:23:30.867: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 14:23:30.867: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC }] Apr 3 14:23:30.867: INFO: Apr 3 14:23:30.867: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 3 14:23:31.872: INFO: 
POD NODE PHASE GRACE CONDITIONS Apr 3 14:23:31.872: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC }] Apr 3 14:23:31.872: INFO: Apr 3 14:23:31.872: INFO: StatefulSet ss has not reached scale 0, at 1 Apr 3 14:23:36.914: INFO: POD NODE PHASE GRACE CONDITIONS Apr 3 14:23:36.914: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:23:18 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-04-03 14:22:36 +0000 UTC }] Apr 3 14:23:36.914: INFO: Apr 3 14:23:36.914: INFO: StatefulSet ss has not reached scale 0, at 1 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in 
namespacestatefulset-3555 Apr 3 14:23:37.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:23:40.256: INFO: rc: 1 Apr 3 14:23:40.256: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc00339ef60 exit status 1 true [0xc001b8a0a8 0xc001b8a0e0 0xc001b8a108] [0xc001b8a0a8 0xc001b8a0e0 0xc001b8a108] [0xc001b8a0d0 0xc001b8a100] [0xba70e0 0xba70e0] 0xc001742780 }: Command stdout: stderr: error: unable to upgrade connection: container not found ("nginx") error: exit status 1 Apr 3 14:23:50.257: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:23:50.349: INFO: rc: 1 Apr 3 14:23:50.350: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea20c0 exit status 1 true [0xc000b92070 0xc000b92858 0xc000b92d38] [0xc000b92070 0xc000b92858 0xc000b92d38] [0xc000b926e0 0xc000b92c48] [0xba70e0 0xba70e0] 0xc001f244e0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 14:24:00.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:24:00.449: INFO: rc: 1 Apr 3 14:24:00.449: INFO: Waiting 10s to retry failed RunHostCmd: 
error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00339f020 exit status 1 true [0xc001b8a110 0xc001b8a1a0 0xc001b8a210] [0xc001b8a110 0xc001b8a1a0 0xc001b8a210] [0xc001b8a128 0xc001b8a1f8] [0xba70e0 0xba70e0] 0xc001742d80 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 14:24:10.450: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:24:10.546: INFO: rc: 1 Apr 3 14:24:10.546: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00339f110 exit status 1 true [0xc001b8a218 0xc001b8a230 0xc001b8a298] [0xc001b8a218 0xc001b8a230 0xc001b8a298] [0xc001b8a228 0xc001b8a278] [0xba70e0 0xba70e0] 0xc0017431a0 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 14:24:20.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:24:20.641: INFO: rc: 1 Apr 3 14:24:20.641: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc001ea21b0 exit status 1 true [0xc000b92db0 0xc000b93320 0xc000b93458] [0xc000b92db0 0xc000b93320 
0xc000b93458] [0xc000b931a0 0xc000b933e0] [0xba70e0 0xba70e0] 0xc001f24d20 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 14:28:33.106: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:28:33.206: INFO: rc: 1 Apr 3 14:28:33.206: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-0" not found [] 0xc00339e390 exit status 1 true [0xc001b8a1d8 0xc001b8a218 0xc001b8a230] [0xc001b8a1d8 0xc001b8a218 0xc001b8a230] [0xc001b8a210 0xc001b8a228] [0xba70e0 0xba70e0] 0xc001743320 }: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Apr 3 14:28:43.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3555 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Apr 3 14:28:43.311: INFO: rc: 1 Apr 3 14:28:43.311: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: Apr 3 14:28:43.311: INFO: Scaling statefulset ss to 0 Apr 3 14:28:43.320: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 Apr 3 14:28:43.322: INFO: Deleting all statefulset in ns statefulset-3555 Apr 3 14:28:43.325: INFO: Scaling statefulset ss to 0 Apr 3 14:28:43.332: INFO: Waiting for statefulset status.replicas updated to 0 Apr 3 14:28:43.335: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:28:43.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3555" for this suite. 
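The teardown above retries the same `kubectl exec` command every 10 seconds until the command succeeds or the pod is gone. The real harness does this in Go inside the e2e framework; as a hedged, runnable sketch of just the retry pattern (the `retry_cmd` and `flaky` helpers are hypothetical stand-ins, and the demo uses a local command instead of kubectl so it needs no cluster):

```shell
#!/bin/sh
# Sketch of the fixed-interval retry loop visible in the log: run a command,
# and on failure wait a fixed delay and try again, up to a maximum number of
# attempts. Returns 0 on the first success, or the last exit code on give-up.
# (Hypothetical helper; not the e2e framework's actual RunHostCmd code.)
retry_cmd() {
  max=$1; delay=$2; shift 2
  attempt=1
  while :; do
    "$@" && return 0
    rc=$?
    [ "$attempt" -ge "$max" ] && return "$rc"
    echo "attempt $attempt failed (rc=$rc); retrying in ${delay}s" >&2
    sleep "$delay"
    attempt=$((attempt + 1))
  done
}

# Local stand-in for the kubectl exec call from the log
# (kubectl exec --namespace=statefulset-3555 ss-0 -- /bin/sh -c 'mv ...'):
# fails on the first call, succeeds once a marker file exists.
marker=$(mktemp -u)
flaky() {
  if [ -e "$marker" ]; then echo ok; else touch "$marker"; return 1; fi
}
retry_cmd 5 0 flaky
```

The harness in the log uses a 10-second delay and roughly a 5-minute budget; the demo uses a zero delay only so it runs instantly.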
Apr 3 14:28:49.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:28:49.469: INFO: namespace statefulset-3555 deletion completed in 6.087715844s
• [SLOW TEST:373.559 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Burst scaling should run to completion even with unhealthy pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:28:49.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-v8qr
STEP: Creating a pod to test atomic-volume-subpath
Apr 3 14:28:49.581: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-v8qr" in namespace "subpath-7006" to be "success or failure"
Apr 3 14:28:49.595: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 14.317687ms
Apr 3 14:28:51.599: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017983366s
Apr 3 14:28:53.602: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 4.021634765s
Apr 3 14:28:55.606: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 6.025538724s
Apr 3 14:28:57.614: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 8.033598924s
Apr 3 14:28:59.618: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 10.037156583s
Apr 3 14:29:01.622: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 12.040879205s
Apr 3 14:29:03.625: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 14.044494206s
Apr 3 14:29:05.639: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 16.058082801s
Apr 3 14:29:07.644: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 18.063424028s
Apr 3 14:29:09.650: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 20.069241182s
Apr 3 14:29:11.654: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Running", Reason="", readiness=true. Elapsed: 22.073336536s
Apr 3 14:29:13.659: INFO: Pod "pod-subpath-test-projected-v8qr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.077972881s
STEP: Saw pod success
Apr 3 14:29:13.659: INFO: Pod "pod-subpath-test-projected-v8qr" satisfied condition "success or failure"
Apr 3 14:29:13.662: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-v8qr container test-container-subpath-projected-v8qr:
STEP: delete the pod
Apr 3 14:29:13.699: INFO: Waiting for pod pod-subpath-test-projected-v8qr to disappear
Apr 3 14:29:13.704: INFO: Pod pod-subpath-test-projected-v8qr no longer exists
STEP: Deleting pod pod-subpath-test-projected-v8qr
Apr 3 14:29:13.704: INFO: Deleting pod "pod-subpath-test-projected-v8qr" in namespace "subpath-7006"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:29:13.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7006" for this suite.
Apr 3 14:29:19.735: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:29:19.812: INFO: namespace subpath-7006 deletion completed in 6.095054123s
• [SLOW TEST:30.343 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with projected pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:29:19.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 3 14:29:19.920: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Apr 3 14:29:19.932: INFO: Number of nodes with available pods: 0
Apr 3 14:29:19.932: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Apr 3 14:29:19.990: INFO: Number of nodes with available pods: 0
Apr 3 14:29:19.990: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:21.018: INFO: Number of nodes with available pods: 0
Apr 3 14:29:21.018: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:21.994: INFO: Number of nodes with available pods: 0
Apr 3 14:29:21.994: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:22.995: INFO: Number of nodes with available pods: 1
Apr 3 14:29:22.995: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Apr 3 14:29:23.053: INFO: Number of nodes with available pods: 1
Apr 3 14:29:23.053: INFO: Number of running nodes: 0, number of available pods: 1
Apr 3 14:29:24.057: INFO: Number of nodes with available pods: 0
Apr 3 14:29:24.057: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Apr 3 14:29:24.065: INFO: Number of nodes with available pods: 0
Apr 3 14:29:24.065: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:25.069: INFO: Number of nodes with available pods: 0
Apr 3 14:29:25.069: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:26.069: INFO: Number of nodes with available pods: 0
Apr 3 14:29:26.069: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:27.069: INFO: Number of nodes with available pods: 0
Apr 3 14:29:27.069: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:28.087: INFO: Number of nodes with available pods: 0
Apr 3 14:29:28.088: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:29.069: INFO: Number of nodes with available pods: 0
Apr 3 14:29:29.069: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:30.070: INFO: Number of nodes with available pods: 0
Apr 3 14:29:30.070: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:31.070: INFO: Number of nodes with available pods: 0
Apr 3 14:29:31.070: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:32.070: INFO: Number of nodes with available pods: 0
Apr 3 14:29:32.070: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:33.072: INFO: Number of nodes with available pods: 0
Apr 3 14:29:33.072: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:34.070: INFO: Number of nodes with available pods: 0
Apr 3 14:29:34.070: INFO: Node iruya-worker is running more than one daemon pod
Apr 3 14:29:35.070: INFO: Number of nodes with available pods: 1
Apr 3 14:29:35.070: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1814, will wait for the garbage collector to delete the pods
Apr 3 14:29:35.135: INFO: Deleting DaemonSet.extensions daemon-set took: 6.183941ms
Apr 3 14:29:35.435: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.266248ms
Apr 3 14:29:38.939: INFO: Number of nodes with available pods: 0
Apr 3 14:29:38.939: INFO: Number of running nodes: 0, number of available pods: 0
Apr 3 14:29:38.942: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1814/daemonsets","resourceVersion":"3410523"},"items":null}
Apr 3 14:29:38.944: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1814/pods","resourceVersion":"3410523"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:29:38.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1814" for this suite.
Apr 3 14:29:45.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:29:45.126: INFO: namespace daemonsets-1814 deletion completed in 6.152972s
• [SLOW TEST:25.314 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:29:45.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Apr 3 14:29:45.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8646'
Apr 3 14:29:45.532: INFO: stderr: ""
Apr 3 14:29:45.532: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 3 14:29:45.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8646'
Apr 3 14:29:45.658: INFO: stderr: ""
Apr 3 14:29:45.658: INFO: stdout: "update-demo-nautilus-44bpk update-demo-nautilus-mm4fm "
Apr 3 14:29:45.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-44bpk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:29:45.772: INFO: stderr: ""
Apr 3 14:29:45.772: INFO: stdout: ""
Apr 3 14:29:45.772: INFO: update-demo-nautilus-44bpk is created but not running
Apr 3 14:29:50.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8646'
Apr 3 14:29:50.869: INFO: stderr: ""
Apr 3 14:29:50.869: INFO: stdout: "update-demo-nautilus-44bpk update-demo-nautilus-mm4fm "
Apr 3 14:29:50.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-44bpk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:29:50.972: INFO: stderr: ""
Apr 3 14:29:50.972: INFO: stdout: "true"
Apr 3 14:29:50.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-44bpk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:29:51.071: INFO: stderr: ""
Apr 3 14:29:51.071: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 3 14:29:51.071: INFO: validating pod update-demo-nautilus-44bpk
Apr 3 14:29:51.075: INFO: got data: { "image": "nautilus.jpg" }
Apr 3 14:29:51.075: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 3 14:29:51.075: INFO: update-demo-nautilus-44bpk is verified up and running
Apr 3 14:29:51.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm4fm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:29:51.166: INFO: stderr: ""
Apr 3 14:29:51.166: INFO: stdout: "true"
Apr 3 14:29:51.166: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mm4fm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:29:51.255: INFO: stderr: ""
Apr 3 14:29:51.255: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Apr 3 14:29:51.255: INFO: validating pod update-demo-nautilus-mm4fm
Apr 3 14:29:51.260: INFO: got data: { "image": "nautilus.jpg" }
Apr 3 14:29:51.260: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Apr 3 14:29:51.260: INFO: update-demo-nautilus-mm4fm is verified up and running
STEP: rolling-update to new replication controller
Apr 3 14:29:51.263: INFO: scanned /root for discovery docs:
Apr 3 14:29:51.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-8646'
Apr 3 14:30:13.825: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Apr 3 14:30:13.825: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Apr 3 14:30:13.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8646'
Apr 3 14:30:13.933: INFO: stderr: ""
Apr 3 14:30:13.933: INFO: stdout: "update-demo-kitten-57jvd update-demo-kitten-hp9bd "
Apr 3 14:30:13.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-57jvd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:30:14.036: INFO: stderr: ""
Apr 3 14:30:14.036: INFO: stdout: "true"
Apr 3 14:30:14.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-57jvd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:30:14.136: INFO: stderr: ""
Apr 3 14:30:14.136: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 3 14:30:14.136: INFO: validating pod update-demo-kitten-57jvd
Apr 3 14:30:14.141: INFO: got data: { "image": "kitten.jpg" }
Apr 3 14:30:14.141: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 3 14:30:14.141: INFO: update-demo-kitten-57jvd is verified up and running
Apr 3 14:30:14.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hp9bd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:30:14.234: INFO: stderr: ""
Apr 3 14:30:14.234: INFO: stdout: "true"
Apr 3 14:30:14.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-hp9bd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8646'
Apr 3 14:30:14.323: INFO: stderr: ""
Apr 3 14:30:14.323: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Apr 3 14:30:14.323: INFO: validating pod update-demo-kitten-hp9bd
Apr 3 14:30:14.326: INFO: got data: { "image": "kitten.jpg" }
Apr 3 14:30:14.326: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Apr 3 14:30:14.326: INFO: update-demo-kitten-hp9bd is verified up and running
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:30:14.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8646" for this suite.
Apr 3 14:30:36.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:30:36.428: INFO: namespace kubectl-8646 deletion completed in 22.09867829s
• [SLOW TEST:51.302 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Update Demo
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should do a rolling update of a replication controller [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:30:36.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:30:40.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2965" for this suite.
Apr 3 14:31:26.588: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:31:26.664: INFO: namespace kubelet-test-2965 deletion completed in 46.094979064s
• [SLOW TEST:50.235 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox command in a pod
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:31:26.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Apr 3 14:31:26.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:31:30.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9364" for this suite.
Apr 3 14:32:08.770: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:32:08.850: INFO: namespace pods-9364 deletion completed in 38.094567323s
• [SLOW TEST:42.186 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-api-machinery] Garbage collector
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:32:08.851: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0403 14:32:49.202887       6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 3 14:32:49.202: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:32:49.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5615" for this suite.
Apr 3 14:32:57.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:32:57.293: INFO: namespace gc-5615 deletion completed in 8.086598243s
• [SLOW TEST:48.443 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should orphan pods created by rc if delete options say so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:32:57.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Apr 3 14:32:57.484: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ec40390f-a3aa-431f-9217-68b9a908a19c" in namespace "downward-api-2478" to be "success or failure"
Apr 3 14:32:57.530: INFO: Pod "downwardapi-volume-ec40390f-a3aa-431f-9217-68b9a908a19c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.222666ms
Apr 3 14:32:59.534: INFO: Pod "downwardapi-volume-ec40390f-a3aa-431f-9217-68b9a908a19c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049695494s
Apr 3 14:33:01.539: INFO: Pod "downwardapi-volume-ec40390f-a3aa-431f-9217-68b9a908a19c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.054592054s
STEP: Saw pod success
Apr 3 14:33:01.539: INFO: Pod "downwardapi-volume-ec40390f-a3aa-431f-9217-68b9a908a19c" satisfied condition "success or failure"
Apr 3 14:33:01.543: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-ec40390f-a3aa-431f-9217-68b9a908a19c container client-container:
STEP: delete the pod
Apr 3 14:33:01.713: INFO: Waiting for pod downwardapi-volume-ec40390f-a3aa-431f-9217-68b9a908a19c to disappear
Apr 3 14:33:01.757: INFO: Pod downwardapi-volume-ec40390f-a3aa-431f-9217-68b9a908a19c no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:33:01.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2478" for this suite.
Apr 3 14:33:07.796: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:33:07.870: INFO: namespace downward-api-2478 deletion completed in 6.10885043s
• [SLOW TEST:10.576 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:33:07.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Apr 3 14:33:07.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Apr 3 14:33:07.992: INFO: stderr: ""
Apr 3 14:33:07.992: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:33:07.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7984" for this suite.
Apr 3 14:33:14.006: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:33:14.077: INFO: namespace kubectl-7984 deletion completed in 6.081381324s
• [SLOW TEST:6.207 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl cluster-info
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if Kubernetes master services is included in cluster-info [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets
should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:33:14.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-512a6be2-93de-4ae2-a27f-16a87f1ad680
STEP: Creating a pod to test consume secrets
Apr 3 14:33:14.178: INFO: Waiting up to 5m0s for pod "pod-secrets-b3792051-44b6-4654-b273-e454bd4baaab" in namespace "secrets-13" to be "success or failure"
Apr 3 14:33:14.182: INFO: Pod "pod-secrets-b3792051-44b6-4654-b273-e454bd4baaab": Phase="Pending", Reason="", readiness=false. Elapsed: 3.71624ms
Apr 3 14:33:16.186: INFO: Pod "pod-secrets-b3792051-44b6-4654-b273-e454bd4baaab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007186696s
Apr 3 14:33:18.190: INFO: Pod "pod-secrets-b3792051-44b6-4654-b273-e454bd4baaab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011648432s
STEP: Saw pod success
Apr 3 14:33:18.190: INFO: Pod "pod-secrets-b3792051-44b6-4654-b273-e454bd4baaab" satisfied condition "success or failure"
Apr 3 14:33:18.194: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b3792051-44b6-4654-b273-e454bd4baaab container secret-volume-test:
STEP: delete the pod
Apr 3 14:33:18.238: INFO: Waiting for pod pod-secrets-b3792051-44b6-4654-b273-e454bd4baaab to disappear
Apr 3 14:33:18.242: INFO: Pod pod-secrets-b3792051-44b6-4654-b273-e454bd4baaab no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:33:18.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-13" for this suite.
Apr 3 14:33:24.259: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:33:24.363: INFO: namespace secrets-13 deletion completed in 6.117663507s • [SLOW TEST:10.286 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:33:24.363: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-74922810-4837-4303-98c5-00256caa56ce STEP: Creating configMap with name cm-test-opt-upd-cac6f219-37b9-45cf-85e8-726c656f3256 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-74922810-4837-4303-98c5-00256caa56ce STEP: Updating configmap cm-test-opt-upd-cac6f219-37b9-45cf-85e8-726c656f3256 STEP: Creating configMap with name cm-test-opt-create-10a00dda-28e4-464a-9459-bf0b78b9b8fe STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 
14:34:46.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6922" for this suite. Apr 3 14:35:08.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:35:08.995: INFO: namespace projected-6922 deletion completed in 22.112823961s • [SLOW TEST:104.632 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:35:08.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs Apr 3 14:35:09.075: INFO: Waiting up to 5m0s for pod "pod-bdee74a7-6c0d-4abe-99fd-6ddfeeb11cd6" in namespace "emptydir-3244" to be "success or failure" Apr 3 14:35:09.079: INFO: Pod "pod-bdee74a7-6c0d-4abe-99fd-6ddfeeb11cd6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.352011ms Apr 3 14:35:11.083: INFO: Pod "pod-bdee74a7-6c0d-4abe-99fd-6ddfeeb11cd6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.007524313s Apr 3 14:35:13.094: INFO: Pod "pod-bdee74a7-6c0d-4abe-99fd-6ddfeeb11cd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018429852s STEP: Saw pod success Apr 3 14:35:13.094: INFO: Pod "pod-bdee74a7-6c0d-4abe-99fd-6ddfeeb11cd6" satisfied condition "success or failure" Apr 3 14:35:13.098: INFO: Trying to get logs from node iruya-worker2 pod pod-bdee74a7-6c0d-4abe-99fd-6ddfeeb11cd6 container test-container: STEP: delete the pod Apr 3 14:35:13.136: INFO: Waiting for pod pod-bdee74a7-6c0d-4abe-99fd-6ddfeeb11cd6 to disappear Apr 3 14:35:13.149: INFO: Pod pod-bdee74a7-6c0d-4abe-99fd-6ddfeeb11cd6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:35:13.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3244" for this suite. Apr 3 14:35:19.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:35:19.244: INFO: namespace emptydir-3244 deletion completed in 6.091941684s • [SLOW TEST:10.248 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:35:19.244: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Apr 3 14:35:19.301: INFO: Waiting up to 5m0s for pod "downwardapi-volume-daf72e50-7f58-4054-b16b-bc4c64b9c426" in namespace "downward-api-8843" to be "success or failure" Apr 3 14:35:19.317: INFO: Pod "downwardapi-volume-daf72e50-7f58-4054-b16b-bc4c64b9c426": Phase="Pending", Reason="", readiness=false. Elapsed: 15.894907ms Apr 3 14:35:21.321: INFO: Pod "downwardapi-volume-daf72e50-7f58-4054-b16b-bc4c64b9c426": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020077258s Apr 3 14:35:23.326: INFO: Pod "downwardapi-volume-daf72e50-7f58-4054-b16b-bc4c64b9c426": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024276964s STEP: Saw pod success Apr 3 14:35:23.326: INFO: Pod "downwardapi-volume-daf72e50-7f58-4054-b16b-bc4c64b9c426" satisfied condition "success or failure" Apr 3 14:35:23.328: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-daf72e50-7f58-4054-b16b-bc4c64b9c426 container client-container: STEP: delete the pod Apr 3 14:35:23.467: INFO: Waiting for pod downwardapi-volume-daf72e50-7f58-4054-b16b-bc4c64b9c426 to disappear Apr 3 14:35:23.506: INFO: Pod downwardapi-volume-daf72e50-7f58-4054-b16b-bc4c64b9c426 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:35:23.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8843" for this suite. 
Apr 3 14:35:29.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:35:29.615: INFO: namespace downward-api-8843 deletion completed in 6.104241957s • [SLOW TEST:10.370 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:35:29.615: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0403 14:35:39.695708 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
Apr 3 14:35:39.695: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:35:39.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4098" for this suite. 
Apr 3 14:35:45.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:35:45.790: INFO: namespace gc-4098 deletion completed in 6.090205314s • [SLOW TEST:16.175 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:35:45.790: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-6325 STEP: creating a selector STEP: Creating the service pods in kubernetes Apr 3 14:35:45.863: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Apr 3 14:36:08.018: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.217:8080/dial?request=hostName&protocol=http&host=10.244.2.216&port=8080&tries=1'] Namespace:pod-network-test-6325 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true 
PreserveWhitespace:false} Apr 3 14:36:08.018: INFO: >>> kubeConfig: /root/.kube/config I0403 14:36:08.050995 6 log.go:172] (0xc000d20370) (0xc001fb61e0) Create stream I0403 14:36:08.051034 6 log.go:172] (0xc000d20370) (0xc001fb61e0) Stream added, broadcasting: 1 I0403 14:36:08.053322 6 log.go:172] (0xc000d20370) Reply frame received for 1 I0403 14:36:08.053371 6 log.go:172] (0xc000d20370) (0xc00339a8c0) Create stream I0403 14:36:08.053384 6 log.go:172] (0xc000d20370) (0xc00339a8c0) Stream added, broadcasting: 3 I0403 14:36:08.054273 6 log.go:172] (0xc000d20370) Reply frame received for 3 I0403 14:36:08.054358 6 log.go:172] (0xc000d20370) (0xc001fb6280) Create stream I0403 14:36:08.054412 6 log.go:172] (0xc000d20370) (0xc001fb6280) Stream added, broadcasting: 5 I0403 14:36:08.055589 6 log.go:172] (0xc000d20370) Reply frame received for 5 I0403 14:36:08.130086 6 log.go:172] (0xc000d20370) Data frame received for 3 I0403 14:36:08.130127 6 log.go:172] (0xc00339a8c0) (3) Data frame handling I0403 14:36:08.130160 6 log.go:172] (0xc00339a8c0) (3) Data frame sent I0403 14:36:08.130587 6 log.go:172] (0xc000d20370) Data frame received for 3 I0403 14:36:08.130645 6 log.go:172] (0xc00339a8c0) (3) Data frame handling I0403 14:36:08.130686 6 log.go:172] (0xc000d20370) Data frame received for 5 I0403 14:36:08.130715 6 log.go:172] (0xc001fb6280) (5) Data frame handling I0403 14:36:08.132546 6 log.go:172] (0xc000d20370) Data frame received for 1 I0403 14:36:08.132577 6 log.go:172] (0xc001fb61e0) (1) Data frame handling I0403 14:36:08.132605 6 log.go:172] (0xc001fb61e0) (1) Data frame sent I0403 14:36:08.132630 6 log.go:172] (0xc000d20370) (0xc001fb61e0) Stream removed, broadcasting: 1 I0403 14:36:08.132657 6 log.go:172] (0xc000d20370) Go away received I0403 14:36:08.132793 6 log.go:172] (0xc000d20370) (0xc001fb61e0) Stream removed, broadcasting: 1 I0403 14:36:08.132826 6 log.go:172] (0xc000d20370) (0xc00339a8c0) Stream removed, broadcasting: 3 I0403 14:36:08.132841 6 log.go:172] 
(0xc000d20370) (0xc001fb6280) Stream removed, broadcasting: 5 Apr 3 14:36:08.132: INFO: Waiting for endpoints: map[] Apr 3 14:36:08.135: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.217:8080/dial?request=hostName&protocol=http&host=10.244.1.118&port=8080&tries=1'] Namespace:pod-network-test-6325 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Apr 3 14:36:08.135: INFO: >>> kubeConfig: /root/.kube/config I0403 14:36:08.163944 6 log.go:172] (0xc000d21080) (0xc001fb65a0) Create stream I0403 14:36:08.163972 6 log.go:172] (0xc000d21080) (0xc001fb65a0) Stream added, broadcasting: 1 I0403 14:36:08.166251 6 log.go:172] (0xc000d21080) Reply frame received for 1 I0403 14:36:08.166295 6 log.go:172] (0xc000d21080) (0xc001dd6c80) Create stream I0403 14:36:08.166310 6 log.go:172] (0xc000d21080) (0xc001dd6c80) Stream added, broadcasting: 3 I0403 14:36:08.167286 6 log.go:172] (0xc000d21080) Reply frame received for 3 I0403 14:36:08.167321 6 log.go:172] (0xc000d21080) (0xc001fb6640) Create stream I0403 14:36:08.167335 6 log.go:172] (0xc000d21080) (0xc001fb6640) Stream added, broadcasting: 5 I0403 14:36:08.168196 6 log.go:172] (0xc000d21080) Reply frame received for 5 I0403 14:36:08.249444 6 log.go:172] (0xc000d21080) Data frame received for 3 I0403 14:36:08.249466 6 log.go:172] (0xc001dd6c80) (3) Data frame handling I0403 14:36:08.249473 6 log.go:172] (0xc001dd6c80) (3) Data frame sent I0403 14:36:08.250083 6 log.go:172] (0xc000d21080) Data frame received for 3 I0403 14:36:08.250126 6 log.go:172] (0xc001dd6c80) (3) Data frame handling I0403 14:36:08.250160 6 log.go:172] (0xc000d21080) Data frame received for 5 I0403 14:36:08.250179 6 log.go:172] (0xc001fb6640) (5) Data frame handling I0403 14:36:08.251876 6 log.go:172] (0xc000d21080) Data frame received for 1 I0403 14:36:08.251912 6 log.go:172] (0xc001fb65a0) (1) Data frame handling I0403 14:36:08.251950 6 log.go:172] 
(0xc001fb65a0) (1) Data frame sent I0403 14:36:08.251976 6 log.go:172] (0xc000d21080) (0xc001fb65a0) Stream removed, broadcasting: 1 I0403 14:36:08.252034 6 log.go:172] (0xc000d21080) Go away received I0403 14:36:08.252123 6 log.go:172] (0xc000d21080) (0xc001fb65a0) Stream removed, broadcasting: 1 I0403 14:36:08.252154 6 log.go:172] (0xc000d21080) (0xc001dd6c80) Stream removed, broadcasting: 3 I0403 14:36:08.252171 6 log.go:172] (0xc000d21080) (0xc001fb6640) Stream removed, broadcasting: 5 Apr 3 14:36:08.252: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:36:08.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6325" for this suite. Apr 3 14:36:30.271: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:36:30.369: INFO: namespace pod-network-test-6325 deletion completed in 22.112692087s • [SLOW TEST:44.579 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:36:30.369: 
INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications Apr 3 14:36:30.462: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4775,SelfLink:/api/v1/namespaces/watch-4775/configmaps/e2e-watch-test-watch-closed,UID:c7af3a65-e05e-4adc-adef-400e82c1cb5b,ResourceVersion:3411983,Generation:0,CreationTimestamp:2020-04-03 14:36:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 3 14:36:30.462: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4775,SelfLink:/api/v1/namespaces/watch-4775/configmaps/e2e-watch-test-watch-closed,UID:c7af3a65-e05e-4adc-adef-400e82c1cb5b,ResourceVersion:3411984,Generation:0,CreationTimestamp:2020-04-03 14:36:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: 
creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed Apr 3 14:36:30.500: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4775,SelfLink:/api/v1/namespaces/watch-4775/configmaps/e2e-watch-test-watch-closed,UID:c7af3a65-e05e-4adc-adef-400e82c1cb5b,ResourceVersion:3411985,Generation:0,CreationTimestamp:2020-04-03 14:36:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 3 14:36:30.500: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-4775,SelfLink:/api/v1/namespaces/watch-4775/configmaps/e2e-watch-test-watch-closed,UID:c7af3a65-e05e-4adc-adef-400e82c1cb5b,ResourceVersion:3411986,Generation:0,CreationTimestamp:2020-04-03 14:36:30 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:36:30.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4775" for this suite. 
Apr 3 14:36:36.515: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:36:36.595: INFO: namespace watch-4775 deletion completed in 6.090443906s • [SLOW TEST:6.226 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:36:36.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-gvrrk in namespace proxy-7770 I0403 14:36:36.706189 6 runners.go:180] Created replication controller with name: proxy-service-gvrrk, namespace: proxy-7770, replica count: 1 I0403 14:36:37.756629 6 runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 14:36:38.756836 6 runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0403 14:36:39.757067 6 
runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0403 14:36:40.757267 6 runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0403 14:36:41.757496 6 runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0403 14:36:42.757763 6 runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0403 14:36:43.757942 6 runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0403 14:36:44.758140 6 runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0403 14:36:45.758299 6 runners.go:180] proxy-service-gvrrk Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady Apr 3 14:36:45.761: INFO: setup took 9.113800409s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts Apr 3 14:36:45.766: INFO: (0) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 5.071677ms) Apr 3 14:36:45.766: INFO: (0) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.205682ms) Apr 3 14:36:45.766: INFO: (0) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... 
(200; 5.444761ms) Apr 3 14:36:45.771: INFO: (0) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 10.425227ms) Apr 3 14:36:45.771: INFO: (0) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 10.375203ms) Apr 3 14:36:45.771: INFO: (0) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 10.430783ms) Apr 3 14:36:45.771: INFO: (0) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 10.31703ms) Apr 3 14:36:45.771: INFO: (0) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 10.499736ms) Apr 3 14:36:45.771: INFO: (0) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... (200; 10.450338ms) Apr 3 14:36:45.772: INFO: (0) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 10.779919ms) Apr 3 14:36:45.772: INFO: (0) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 10.843297ms) Apr 3 14:36:45.776: INFO: (0) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 15.508316ms) Apr 3 14:36:45.777: INFO: (0) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 16.343898ms) Apr 3 14:36:45.783: INFO: (0) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 21.796607ms) Apr 3 14:36:45.783: INFO: (0) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... 
(200; 4.464757ms) Apr 3 14:36:45.788: INFO: (1) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 4.535412ms) Apr 3 14:36:45.788: INFO: (1) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 4.614684ms) Apr 3 14:36:45.788: INFO: (1) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 4.624748ms) Apr 3 14:36:45.788: INFO: (1) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 4.758431ms) Apr 3 14:36:45.788: INFO: (1) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 4.903476ms) Apr 3 14:36:45.788: INFO: (1) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test<... (200; 5.254435ms) Apr 3 14:36:45.790: INFO: (1) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 6.297826ms) Apr 3 14:36:45.790: INFO: (1) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 6.724481ms) Apr 3 14:36:45.790: INFO: (1) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 6.743727ms) Apr 3 14:36:45.790: INFO: (1) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 6.82655ms) Apr 3 14:36:45.790: INFO: (1) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 6.857791ms) Apr 3 14:36:45.790: INFO: (1) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 6.75801ms) Apr 3 14:36:45.794: INFO: (2) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.526993ms) Apr 3 14:36:45.794: INFO: (2) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 3.771196ms) Apr 3 14:36:45.795: INFO: (2) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... 
(200; 3.883161ms) Apr 3 14:36:45.795: INFO: (2) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test<... (200; 4.762669ms) Apr 3 14:36:45.796: INFO: (2) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 5.498434ms) Apr 3 14:36:45.796: INFO: (2) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 5.481526ms) Apr 3 14:36:45.797: INFO: (2) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.858234ms) Apr 3 14:36:45.797: INFO: (2) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 6.096165ms) Apr 3 14:36:45.797: INFO: (2) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 6.205717ms) Apr 3 14:36:45.797: INFO: (2) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 6.136438ms) Apr 3 14:36:45.797: INFO: (2) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 6.194234ms) Apr 3 14:36:45.797: INFO: (2) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 6.450994ms) Apr 3 14:36:45.797: INFO: (2) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 6.480333ms) Apr 3 14:36:45.802: INFO: (3) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 5.212911ms) Apr 3 14:36:45.802: INFO: (3) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.264952ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 5.279688ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... 
(200; 5.401398ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 5.354338ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 5.383341ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 5.377612ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 5.417485ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 5.433392ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 5.505521ms) Apr 3 14:36:45.803: INFO: (3) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... (200; 6.862055ms) Apr 3 14:36:45.811: INFO: (4) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 5.787656ms) Apr 3 14:36:45.811: INFO: (4) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 6.00002ms) Apr 3 14:36:45.811: INFO: (4) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 5.738257ms) Apr 3 14:36:45.811: INFO: (4) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 5.986433ms) Apr 3 14:36:45.811: INFO: (4) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... (200; 6.012195ms) Apr 3 14:36:45.811: INFO: (4) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test<... 
(200; 6.764562ms) Apr 3 14:36:45.812: INFO: (4) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 6.109179ms) Apr 3 14:36:45.812: INFO: (4) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 6.711353ms) Apr 3 14:36:45.812: INFO: (4) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 6.921567ms) Apr 3 14:36:45.812: INFO: (4) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 7.7874ms) Apr 3 14:36:45.812: INFO: (4) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 8.104821ms) Apr 3 14:36:45.813: INFO: (4) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 8.158544ms) Apr 3 14:36:45.813: INFO: (4) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 7.642972ms) Apr 3 14:36:45.813: INFO: (4) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 7.998267ms) Apr 3 14:36:45.817: INFO: (5) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... (200; 3.325992ms) Apr 3 14:36:45.817: INFO: (5) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.383447ms) Apr 3 14:36:45.817: INFO: (5) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 3.743663ms) Apr 3 14:36:45.817: INFO: (5) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 3.975839ms) Apr 3 14:36:45.817: INFO: (5) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... 
(200; 4.139811ms) Apr 3 14:36:45.817: INFO: (5) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 4.116008ms) Apr 3 14:36:45.818: INFO: (5) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 4.339886ms) Apr 3 14:36:45.818: INFO: (5) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 4.606267ms) Apr 3 14:36:45.818: INFO: (5) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 4.626728ms) Apr 3 14:36:45.818: INFO: (5) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 4.692296ms) Apr 3 14:36:45.818: INFO: (5) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 4.657923ms) Apr 3 14:36:45.818: INFO: (5) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 4.723945ms) Apr 3 14:36:45.818: INFO: (5) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 4.776085ms) Apr 3 14:36:45.818: INFO: (5) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 4.709643ms) Apr 3 14:36:45.820: INFO: (6) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 2.424665ms) Apr 3 14:36:45.820: INFO: (6) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... (200; 2.418727ms) Apr 3 14:36:45.820: INFO: (6) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 2.362182ms) Apr 3 14:36:45.822: INFO: (6) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... 
(200; 3.720276ms) Apr 3 14:36:45.822: INFO: (6) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.783519ms) Apr 3 14:36:45.822: INFO: (6) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 4.059534ms) Apr 3 14:36:45.822: INFO: (6) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 4.146391ms) Apr 3 14:36:45.822: INFO: (6) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test (200; 3.026715ms) Apr 3 14:36:45.827: INFO: (7) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... (200; 5.783296ms) Apr 3 14:36:45.829: INFO: (7) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.768835ms) Apr 3 14:36:45.829: INFO: (7) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 5.854782ms) Apr 3 14:36:45.829: INFO: (7) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... 
(200; 5.849213ms) Apr 3 14:36:45.830: INFO: (7) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 6.610914ms) Apr 3 14:36:45.831: INFO: (7) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 7.238558ms) Apr 3 14:36:45.831: INFO: (7) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 7.271731ms) Apr 3 14:36:45.831: INFO: (7) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 7.632707ms) Apr 3 14:36:45.831: INFO: (7) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 7.791289ms) Apr 3 14:36:45.831: INFO: (7) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 7.756207ms) Apr 3 14:36:45.831: INFO: (7) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 7.812314ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 4.03455ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 4.195189ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 4.162829ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 4.1985ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 4.178648ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 4.248866ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... 
(200; 4.301066ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 4.327131ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 4.315618ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 4.291099ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... (200; 4.31822ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 4.321438ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 4.52093ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 4.598118ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 4.641823ms) Apr 3 14:36:45.836: INFO: (8) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... (200; 3.752762ms) Apr 3 14:36:45.840: INFO: (9) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 3.774044ms) Apr 3 14:36:45.841: INFO: (9) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... 
(200; 4.453866ms) Apr 3 14:36:45.842: INFO: (9) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.117657ms) Apr 3 14:36:45.842: INFO: (9) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 5.274982ms) Apr 3 14:36:45.842: INFO: (9) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 5.527609ms) Apr 3 14:36:45.842: INFO: (9) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.465928ms) Apr 3 14:36:45.843: INFO: (9) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test (200; 3.519584ms) Apr 3 14:36:45.848: INFO: (10) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... (200; 3.754275ms) Apr 3 14:36:45.848: INFO: (10) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... (200; 3.823567ms) Apr 3 14:36:45.848: INFO: (10) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.837584ms) Apr 3 14:36:45.848: INFO: (10) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... (200; 2.929084ms) Apr 3 14:36:45.852: INFO: (11) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.192595ms) Apr 3 14:36:45.853: INFO: (11) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 3.326028ms) Apr 3 14:36:45.853: INFO: (11) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 3.41973ms) Apr 3 14:36:45.853: INFO: (11) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... 
(200; 3.448837ms) Apr 3 14:36:45.853: INFO: (11) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.627163ms) Apr 3 14:36:45.853: INFO: (11) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 3.663588ms) Apr 3 14:36:45.853: INFO: (11) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 3.670545ms) Apr 3 14:36:45.853: INFO: (11) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 3.965735ms) Apr 3 14:36:45.853: INFO: (11) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... (200; 4.375998ms) Apr 3 14:36:45.859: INFO: (12) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 4.59951ms) Apr 3 14:36:45.859: INFO: (12) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... (200; 4.619068ms) Apr 3 14:36:45.860: INFO: (12) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 4.734604ms) Apr 3 14:36:45.860: INFO: (12) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 4.771833ms) Apr 3 14:36:45.860: INFO: (12) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 4.698831ms) Apr 3 14:36:45.860: INFO: (12) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.580106ms) Apr 3 14:36:45.860: INFO: (12) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 5.518487ms) Apr 3 14:36:45.860: INFO: (12) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 5.492273ms) Apr 3 14:36:45.861: INFO: (12) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 6.529287ms) Apr 3 14:36:45.862: INFO: (12) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 
6.686051ms) Apr 3 14:36:45.862: INFO: (12) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 6.671911ms) Apr 3 14:36:45.862: INFO: (12) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 6.723995ms) Apr 3 14:36:45.864: INFO: (13) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 2.237629ms) Apr 3 14:36:45.865: INFO: (13) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 3.56949ms) Apr 3 14:36:45.866: INFO: (13) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... (200; 3.907724ms) Apr 3 14:36:45.866: INFO: (13) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 4.340858ms) Apr 3 14:36:45.866: INFO: (13) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 4.490048ms) Apr 3 14:36:45.866: INFO: (13) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... (200; 4.582679ms) Apr 3 14:36:45.866: INFO: (13) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 4.600339ms) Apr 3 14:36:45.866: INFO: (13) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... (200; 3.397341ms) Apr 3 14:36:45.870: INFO: (14) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... 
(200; 3.241646ms) Apr 3 14:36:45.870: INFO: (14) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 3.557076ms) Apr 3 14:36:45.871: INFO: (14) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 3.288523ms) Apr 3 14:36:45.871: INFO: (14) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.632374ms) Apr 3 14:36:45.871: INFO: (14) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 3.528399ms) Apr 3 14:36:45.871: INFO: (14) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.905759ms) Apr 3 14:36:45.871: INFO: (14) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 3.742345ms) Apr 3 14:36:45.871: INFO: (14) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 3.574789ms) Apr 3 14:36:45.871: INFO: (14) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... (200; 8.978866ms) Apr 3 14:36:45.882: INFO: (15) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 8.979285ms) Apr 3 14:36:45.882: INFO: (15) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... (200; 9.281216ms) Apr 3 14:36:45.882: INFO: (15) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 9.349244ms) Apr 3 14:36:45.882: INFO: (15) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 9.449203ms) Apr 3 14:36:45.882: INFO: (15) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 9.688403ms) Apr 3 14:36:45.882: INFO: (15) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 9.637479ms) Apr 3 14:36:45.883: INFO: (15) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test<... 
(200; 2.070851ms) Apr 3 14:36:45.886: INFO: (16) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 2.92189ms) Apr 3 14:36:45.887: INFO: (16) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.381775ms) Apr 3 14:36:45.887: INFO: (16) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... (200; 3.492081ms) Apr 3 14:36:45.887: INFO: (16) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 3.499607ms) Apr 3 14:36:45.887: INFO: (16) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 3.564809ms) Apr 3 14:36:45.887: INFO: (16) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 3.530345ms) Apr 3 14:36:45.887: INFO: (16) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 3.524533ms) Apr 3 14:36:45.887: INFO: (16) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 3.598937ms) Apr 3 14:36:45.887: INFO: (16) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test (200; 4.881741ms) Apr 3 14:36:45.893: INFO: (17) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 5.064753ms) Apr 3 14:36:45.893: INFO: (17) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 5.098855ms) Apr 3 14:36:45.893: INFO: (17) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... (200; 5.152862ms) Apr 3 14:36:45.893: INFO: (17) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 5.149555ms) Apr 3 14:36:45.893: INFO: (17) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test<... 
(200; 5.209605ms) Apr 3 14:36:45.894: INFO: (17) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 5.315255ms) Apr 3 14:36:45.894: INFO: (17) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 5.364824ms) Apr 3 14:36:45.894: INFO: (17) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 5.46893ms) Apr 3 14:36:45.894: INFO: (17) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 5.487701ms) Apr 3 14:36:45.894: INFO: (17) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.552344ms) Apr 3 14:36:45.894: INFO: (17) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 5.466784ms) Apr 3 14:36:45.897: INFO: (18) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:1080/proxy/: test<... (200; 2.47199ms) Apr 3 14:36:45.897: INFO: (18) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 2.765157ms) Apr 3 14:36:45.897: INFO: (18) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:1080/proxy/: ... 
(200; 2.938031ms) Apr 3 14:36:45.897: INFO: (18) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 2.998106ms) Apr 3 14:36:45.898: INFO: (18) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 4.230279ms) Apr 3 14:36:45.898: INFO: (18) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 4.393432ms) Apr 3 14:36:45.898: INFO: (18) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 4.279171ms) Apr 3 14:36:45.898: INFO: (18) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb:160/proxy/: foo (200; 4.397267ms) Apr 3 14:36:45.898: INFO: (18) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 4.284312ms) Apr 3 14:36:45.899: INFO: (18) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: ... (200; 4.338833ms) Apr 3 14:36:45.904: INFO: (19) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname2/proxy/: bar (200; 4.358281ms) Apr 3 14:36:45.905: INFO: (19) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:460/proxy/: tls baz (200; 4.576085ms) Apr 3 14:36:45.905: INFO: (19) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname1/proxy/: foo (200; 4.993326ms) Apr 3 14:36:45.905: INFO: (19) /api/v1/namespaces/proxy-7770/services/http:proxy-service-gvrrk:portname2/proxy/: bar (200; 5.112518ms) Apr 3 14:36:45.905: INFO: (19) /api/v1/namespaces/proxy-7770/services/proxy-service-gvrrk:portname1/proxy/: foo (200; 5.027941ms) Apr 3 14:36:45.905: INFO: (19) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:462/proxy/: tls qux (200; 5.204609ms) Apr 3 14:36:45.906: INFO: (19) /api/v1/namespaces/proxy-7770/pods/https:proxy-service-gvrrk-xrnjb:443/proxy/: test<... 
(200; 5.5633ms) Apr 3 14:36:45.906: INFO: (19) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname2/proxy/: tls qux (200; 5.631806ms) Apr 3 14:36:45.906: INFO: (19) /api/v1/namespaces/proxy-7770/services/https:proxy-service-gvrrk:tlsportname1/proxy/: tls baz (200; 5.612498ms) Apr 3 14:36:45.906: INFO: (19) /api/v1/namespaces/proxy-7770/pods/proxy-service-gvrrk-xrnjb/proxy/: test (200; 5.584125ms) Apr 3 14:36:45.906: INFO: (19) /api/v1/namespaces/proxy-7770/pods/http:proxy-service-gvrrk-xrnjb:162/proxy/: bar (200; 5.655339ms) STEP: deleting ReplicationController proxy-service-gvrrk in namespace proxy-7770, will wait for the garbage collector to delete the pods Apr 3 14:36:45.965: INFO: Deleting ReplicationController proxy-service-gvrrk took: 7.685084ms Apr 3 14:36:46.266: INFO: Terminating ReplicationController proxy-service-gvrrk pods took: 300.234444ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:36:48.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-7770" for this suite. 
Apr 3 14:36:54.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:36:54.374: INFO: namespace proxy-7770 deletion completed in 6.103857528s • [SLOW TEST:17.778 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:36:54.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-mb6z STEP: Creating a pod to test atomic-volume-subpath Apr 3 14:36:54.462: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-mb6z" in namespace "subpath-3122" to be "success or failure" Apr 3 14:36:54.466: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.501697ms Apr 3 14:36:56.469: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007138105s Apr 3 14:36:58.474: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 4.011367388s Apr 3 14:37:00.478: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 6.015390674s Apr 3 14:37:02.482: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 8.019524348s Apr 3 14:37:04.486: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 10.023751556s Apr 3 14:37:06.490: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 12.027855553s Apr 3 14:37:08.494: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 14.031887883s Apr 3 14:37:10.499: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 16.03629484s Apr 3 14:37:12.503: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 18.040347725s Apr 3 14:37:14.506: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 20.043993306s Apr 3 14:37:16.511: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Running", Reason="", readiness=true. Elapsed: 22.048423094s Apr 3 14:37:18.515: INFO: Pod "pod-subpath-test-secret-mb6z": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.052453935s STEP: Saw pod success Apr 3 14:37:18.515: INFO: Pod "pod-subpath-test-secret-mb6z" satisfied condition "success or failure" Apr 3 14:37:18.518: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-secret-mb6z container test-container-subpath-secret-mb6z: STEP: delete the pod Apr 3 14:37:18.552: INFO: Waiting for pod pod-subpath-test-secret-mb6z to disappear Apr 3 14:37:18.562: INFO: Pod pod-subpath-test-secret-mb6z no longer exists STEP: Deleting pod pod-subpath-test-secret-mb6z Apr 3 14:37:18.562: INFO: Deleting pod "pod-subpath-test-secret-mb6z" in namespace "subpath-3122" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:37:18.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3122" for this suite. Apr 3 14:37:24.579: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:37:24.655: INFO: namespace subpath-3122 deletion completed in 6.086758468s • [SLOW TEST:30.281 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:37:24.656: INFO: 
>>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-77d40268-3715-463f-9a89-7eff92538b17 STEP: Creating a pod to test consume secrets Apr 3 14:37:24.723: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-018b30e0-c2dc-485f-8da8-c717d1a09856" in namespace "projected-4606" to be "success or failure" Apr 3 14:37:24.726: INFO: Pod "pod-projected-secrets-018b30e0-c2dc-485f-8da8-c717d1a09856": Phase="Pending", Reason="", readiness=false. Elapsed: 3.284336ms Apr 3 14:37:26.731: INFO: Pod "pod-projected-secrets-018b30e0-c2dc-485f-8da8-c717d1a09856": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007505461s Apr 3 14:37:28.734: INFO: Pod "pod-projected-secrets-018b30e0-c2dc-485f-8da8-c717d1a09856": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011021597s STEP: Saw pod success Apr 3 14:37:28.734: INFO: Pod "pod-projected-secrets-018b30e0-c2dc-485f-8da8-c717d1a09856" satisfied condition "success or failure" Apr 3 14:37:28.736: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-018b30e0-c2dc-485f-8da8-c717d1a09856 container projected-secret-volume-test: STEP: delete the pod Apr 3 14:37:28.752: INFO: Waiting for pod pod-projected-secrets-018b30e0-c2dc-485f-8da8-c717d1a09856 to disappear Apr 3 14:37:28.756: INFO: Pod pod-projected-secrets-018b30e0-c2dc-485f-8da8-c717d1a09856 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:37:28.756: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4606" for this suite. 
Apr 3 14:37:34.772: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:37:34.848: INFO: namespace projected-4606 deletion completed in 6.089035276s • [SLOW TEST:10.193 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:37:34.849: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-cbf61f25-e1a5-4677-99b5-375c69c53e6d STEP: Creating a pod to test consume secrets Apr 3 14:37:34.908: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b36e74d1-63c2-4852-af7f-4871f00454f5" in namespace "projected-3050" to be "success or failure" Apr 3 14:37:34.913: INFO: Pod "pod-projected-secrets-b36e74d1-63c2-4852-af7f-4871f00454f5": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.757125ms Apr 3 14:37:36.917: INFO: Pod "pod-projected-secrets-b36e74d1-63c2-4852-af7f-4871f00454f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008970499s Apr 3 14:37:38.929: INFO: Pod "pod-projected-secrets-b36e74d1-63c2-4852-af7f-4871f00454f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020248968s STEP: Saw pod success Apr 3 14:37:38.929: INFO: Pod "pod-projected-secrets-b36e74d1-63c2-4852-af7f-4871f00454f5" satisfied condition "success or failure" Apr 3 14:37:38.931: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-b36e74d1-63c2-4852-af7f-4871f00454f5 container projected-secret-volume-test: STEP: delete the pod Apr 3 14:37:38.959: INFO: Waiting for pod pod-projected-secrets-b36e74d1-63c2-4852-af7f-4871f00454f5 to disappear Apr 3 14:37:38.963: INFO: Pod pod-projected-secrets-b36e74d1-63c2-4852-af7f-4871f00454f5 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:37:38.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3050" for this suite. 
Apr 3 14:37:44.990: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:37:45.063: INFO: namespace projected-3050 deletion completed in 6.097781613s • [SLOW TEST:10.215 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:37:45.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook Apr 3 14:37:53.175: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 14:37:53.196: INFO: Pod pod-with-poststart-http-hook still exists Apr 3 14:37:55.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 14:37:55.200: INFO: Pod pod-with-poststart-http-hook still exists Apr 3 14:37:57.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 14:37:57.200: INFO: Pod pod-with-poststart-http-hook still exists Apr 3 14:37:59.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 14:37:59.200: INFO: Pod pod-with-poststart-http-hook still exists Apr 3 14:38:01.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 14:38:01.200: INFO: Pod pod-with-poststart-http-hook still exists Apr 3 14:38:03.196: INFO: Waiting for pod pod-with-poststart-http-hook to disappear Apr 3 14:38:03.200: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:38:03.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-7325" for this suite. 
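The teardown above polls every two seconds ("Waiting for pod pod-with-poststart-http-hook to disappear" / "still exists") until the pod object is gone. A minimal standalone sketch of that poll-until-gone pattern follows; the `wait_gone` helper and the example `kubectl` invocation in the trailing comment are illustrative assumptions, not part of the e2e framework:

```shell
# Run a probe command every 2s, mirroring the "Waiting for pod ... to
# disappear" loop in the log above. Exits 0 as soon as the probe fails
# (i.e. the resource can no longer be found); exits 1 if the resource
# is still present after the given number of attempts.
wait_gone() {
  tries=$1; shift
  i=0
  while [ "$i" -lt "$tries" ]; do
    "$@" >/dev/null 2>&1 || return 0   # probe failed: resource is gone
    sleep 2
    i=$((i + 1))
  done
  return 1                             # still present after all attempts
}

# Illustrative use (assumes kubectl and a live cluster, as in the log):
# wait_gone 60 kubectl get pod pod-with-poststart-http-hook -n container-lifecycle-hook-7325
```

The framework's own loop behaves the same way but with a hard deadline rather than an attempt count; the attempt-count form above is just the simplest portable sketch.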
Apr 3 14:38:25.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:38:25.292: INFO: namespace container-lifecycle-hook-7325 deletion completed in 22.08625619s • [SLOW TEST:40.228 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:38:25.292: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 3 14:38:25.368: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 
--namespace=kubectl-5325' Apr 3 14:38:27.822: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Apr 3 14:38:27.822: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created Apr 3 14:38:27.826: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 Apr 3 14:38:27.830: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller Apr 3 14:38:27.884: INFO: scanned /root for discovery docs: Apr 3 14:38:27.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5325' Apr 3 14:38:43.691: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Apr 3 14:38:43.691: INFO: stdout: "Created e2e-test-nginx-rc-842b96a054ae20ae04f185741e327d4c\nScaling up e2e-test-nginx-rc-842b96a054ae20ae04f185741e327d4c from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-842b96a054ae20ae04f185741e327d4c up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-842b96a054ae20ae04f185741e327d4c to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. Apr 3 14:38:43.691: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5325' Apr 3 14:38:43.778: INFO: stderr: "" Apr 3 14:38:43.779: INFO: stdout: "e2e-test-nginx-rc-842b96a054ae20ae04f185741e327d4c-65vn2 " Apr 3 14:38:43.779: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-842b96a054ae20ae04f185741e327d4c-65vn2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5325' Apr 3 14:38:43.868: INFO: stderr: "" Apr 3 14:38:43.868: INFO: stdout: "true" Apr 3 14:38:43.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-842b96a054ae20ae04f185741e327d4c-65vn2 -o template --template={{if (exists .
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5325' Apr 3 14:38:43.962: INFO: stderr: "" Apr 3 14:38:43.962: INFO: stdout: "docker.io/library/nginx:1.14-alpine" Apr 3 14:38:43.962: INFO: e2e-test-nginx-rc-842b96a054ae20ae04f185741e327d4c-65vn2 is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 Apr 3 14:38:43.962: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5325' Apr 3 14:38:44.062: INFO: stderr: "" Apr 3 14:38:44.062: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:38:44.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5325" for this suite. 
Apr 3 14:38:50.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:38:50.179: INFO: namespace kubectl-5325 deletion completed in 6.104107102s • [SLOW TEST:24.887 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:38:50.180: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Apr 3 14:38:50.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 
--image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-9357' Apr 3 14:38:50.348: INFO: stderr: "" Apr 3 14:38:50.348: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod is running STEP: verifying the pod e2e-test-nginx-pod was created Apr 3 14:38:55.399: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-9357 -o json' Apr 3 14:38:55.501: INFO: stderr: "" Apr 3 14:38:55.501: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-04-03T14:38:50Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-9357\",\n \"resourceVersion\": \"3412530\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-9357/pods/e2e-test-nginx-pod\",\n \"uid\": \"8ac053bc-d404-478f-8eca-739d6d3c107a\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-bl8fc\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n 
\"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-bl8fc\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-bl8fc\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-03T14:38:50Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-03T14:38:53Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-03T14:38:53Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-04-03T14:38:50Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://968e8a9dc10cbc87e2a539081c3d9de89184a8107b177a551e7eb44ae85b45a8\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-04-03T14:38:52Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.5\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.123\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-04-03T14:38:50Z\"\n }\n}\n" STEP: replace the image in the pod Apr 3 14:38:55.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-9357' Apr 3 14:38:55.775: INFO: stderr: "" Apr 3 14:38:55.775: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n" STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29 [AfterEach] [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726 Apr 3 
14:38:55.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-9357' Apr 3 14:38:58.518: INFO: stderr: "" Apr 3 14:38:58.518: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:38:58.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9357" for this suite. Apr 3 14:39:04.532: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:39:04.622: INFO: namespace kubectl-9357 deletion completed in 6.100135173s • [SLOW TEST:14.442 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:39:04.622: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object Apr 3 14:39:04.710: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2995,SelfLink:/api/v1/namespaces/watch-2995/configmaps/e2e-watch-test-label-changed,UID:78b1a271-6398-4b61-b9e5-e539bcd4ba2b,ResourceVersion:3412577,Generation:0,CreationTimestamp:2020-04-03 14:39:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Apr 3 14:39:04.710: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2995,SelfLink:/api/v1/namespaces/watch-2995/configmaps/e2e-watch-test-label-changed,UID:78b1a271-6398-4b61-b9e5-e539bcd4ba2b,ResourceVersion:3412578,Generation:0,CreationTimestamp:2020-04-03 14:39:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Apr 3 14:39:04.710: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2995,SelfLink:/api/v1/namespaces/watch-2995/configmaps/e2e-watch-test-label-changed,UID:78b1a271-6398-4b61-b9e5-e539bcd4ba2b,ResourceVersion:3412579,Generation:0,CreationTimestamp:2020-04-03 14:39:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored Apr 3 14:39:14.782: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2995,SelfLink:/api/v1/namespaces/watch-2995/configmaps/e2e-watch-test-label-changed,UID:78b1a271-6398-4b61-b9e5-e539bcd4ba2b,ResourceVersion:3412600,Generation:0,CreationTimestamp:2020-04-03 14:39:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Apr 3 14:39:14.782: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2995,SelfLink:/api/v1/namespaces/watch-2995/configmaps/e2e-watch-test-label-changed,UID:78b1a271-6398-4b61-b9e5-e539bcd4ba2b,ResourceVersion:3412601,Generation:0,CreationTimestamp:2020-04-03 14:39:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} Apr 3 14:39:14.782: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-2995,SelfLink:/api/v1/namespaces/watch-2995/configmaps/e2e-watch-test-label-changed,UID:78b1a271-6398-4b61-b9e5-e539bcd4ba2b,ResourceVersion:3412602,Generation:0,CreationTimestamp:2020-04-03 14:39:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:39:14.783: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2995" for this suite. 
Apr 3 14:39:20.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:39:20.876: INFO: namespace watch-2995 deletion completed in 6.088907142s • [SLOW TEST:16.254 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:39:20.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-1646/configmap-test-f06647e9-7e3c-4837-b5e5-9593f532d512 STEP: Creating a pod to test consume configMaps Apr 3 14:39:20.956: INFO: Waiting up to 5m0s for pod "pod-configmaps-b930f675-0e78-43f6-8975-92a8c13b2094" in namespace "configmap-1646" to be "success or failure" Apr 3 14:39:20.995: INFO: Pod "pod-configmaps-b930f675-0e78-43f6-8975-92a8c13b2094": Phase="Pending", Reason="", readiness=false. Elapsed: 38.699754ms Apr 3 14:39:22.999: INFO: Pod "pod-configmaps-b930f675-0e78-43f6-8975-92a8c13b2094": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042888818s Apr 3 14:39:25.007: INFO: Pod "pod-configmaps-b930f675-0e78-43f6-8975-92a8c13b2094": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050680493s STEP: Saw pod success Apr 3 14:39:25.007: INFO: Pod "pod-configmaps-b930f675-0e78-43f6-8975-92a8c13b2094" satisfied condition "success or failure" Apr 3 14:39:25.010: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-b930f675-0e78-43f6-8975-92a8c13b2094 container env-test: STEP: delete the pod Apr 3 14:39:25.029: INFO: Waiting for pod pod-configmaps-b930f675-0e78-43f6-8975-92a8c13b2094 to disappear Apr 3 14:39:25.034: INFO: Pod pod-configmaps-b930f675-0e78-43f6-8975-92a8c13b2094 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:39:25.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-1646" for this suite. Apr 3 14:39:31.050: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Apr 3 14:39:31.132: INFO: namespace configmap-1646 deletion completed in 6.094207975s • [SLOW TEST:10.255 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Apr 3 14:39:31.132: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292 STEP: creating an rc Apr 3 14:39:31.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4068' Apr 3 14:39:31.458: INFO: stderr: "" Apr 3 14:39:31.458: INFO: stdout: "replicationcontroller/redis-master created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Waiting for Redis master to start. Apr 3 14:39:32.463: INFO: Selector matched 1 pods for map[app:redis] Apr 3 14:39:32.463: INFO: Found 0 / 1 Apr 3 14:39:33.463: INFO: Selector matched 1 pods for map[app:redis] Apr 3 14:39:33.463: INFO: Found 0 / 1 Apr 3 14:39:34.463: INFO: Selector matched 1 pods for map[app:redis] Apr 3 14:39:34.463: INFO: Found 0 / 1 Apr 3 14:39:35.463: INFO: Selector matched 1 pods for map[app:redis] Apr 3 14:39:35.463: INFO: Found 1 / 1 Apr 3 14:39:35.463: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Apr 3 14:39:35.466: INFO: Selector matched 1 pods for map[app:redis] Apr 3 14:39:35.466: INFO: ForEach: Found 1 pods from the filter. Now looping through them. STEP: checking for a matching strings Apr 3 14:39:35.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k6bhq redis-master --namespace=kubectl-4068' Apr 3 14:39:35.573: INFO: stderr: "" Apr 3 14:39:35.573: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Apr 14:39:33.930 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Apr 14:39:33.930 # Server started, Redis version 3.2.12\n1:M 03 Apr 14:39:33.930 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 03 Apr 14:39:33.930 * The server is now ready to accept connections on port 6379\n" STEP: limiting log lines Apr 3 14:39:35.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k6bhq redis-master --namespace=kubectl-4068 --tail=1' Apr 3 14:39:35.675: INFO: stderr: "" Apr 3 14:39:35.675: INFO: stdout: "1:M 03 Apr 14:39:33.930 * The server is now ready to accept connections on port 6379\n" STEP: limiting log bytes Apr 3 14:39:35.676: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k6bhq redis-master --namespace=kubectl-4068 --limit-bytes=1' Apr 3 14:39:35.782: INFO: stderr: "" Apr 3 14:39:35.782: INFO: stdout: " " STEP: exposing timestamps Apr 3 14:39:35.782: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k6bhq redis-master --namespace=kubectl-4068 --tail=1 --timestamps' Apr 3 14:39:35.885: INFO: stderr: "" Apr 3 14:39:35.885: INFO: 
stdout: "2020-04-03T14:39:33.931092436Z 1:M 03 Apr 14:39:33.930 * The server is now ready to accept connections on port 6379\n" STEP: restricting to a time range Apr 3 14:39:38.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k6bhq redis-master --namespace=kubectl-4068 --since=1s' Apr 3 14:39:38.490: INFO: stderr: "" Apr 3 14:39:38.490: INFO: stdout: "" Apr 3 14:39:38.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-k6bhq redis-master --namespace=kubectl-4068 --since=24h' Apr 3 14:39:38.602: INFO: stderr: "" Apr 3 14:39:38.602: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 03 Apr 14:39:33.930 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 03 Apr 14:39:33.930 # Server started, Redis version 3.2.12\n1:M 03 Apr 14:39:33.930 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. 
Redis must be restarted after THP is disabled.\n1:M 03 Apr 14:39:33.930 * The server is now ready to accept connections on port 6379\n" [AfterEach] [k8s.io] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298 STEP: using delete to clean up resources Apr 3 14:39:38.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4068' Apr 3 14:39:38.710: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Apr 3 14:39:38.710: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n" Apr 3 14:39:38.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-4068' Apr 3 14:39:38.826: INFO: stderr: "No resources found.\n" Apr 3 14:39:38.826: INFO: stdout: "" Apr 3 14:39:38.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-4068 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' Apr 3 14:39:38.964: INFO: stderr: "" Apr 3 14:39:38.965: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Apr 3 14:39:38.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4068" for this suite. 
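The `kubectl logs` invocations in this test exercise four filtering flags: `--tail` (keep only the last N lines), `--limit-bytes` (truncate the selected output to N bytes), `--timestamps` (prefix each line with its RFC3339 timestamp), and `--since` (only entries newer than the given duration, which is why `--since=1s` returned an empty stdout while `--since=24h` returned the full Redis banner). The first two selections can be reproduced on a plain file with coreutils, as a runnable sketch that needs no cluster; the file path and contents below are made up for illustration, and kubectl applies the real filtering server-side:

```shell
# Build a small stand-in for a container log.
printf 'line1\nline2\nline3\n' > /tmp/kubectl-logs-demo.txt

# Analogue of: kubectl logs POD --tail=1      -> keep only the last line.
tail -n 1 /tmp/kubectl-logs-demo.txt

# Analogue of: kubectl logs POD --limit-bytes=1 -> truncate to one byte,
# matching the single-character stdout (" ") seen in the log above.
head -c 1 /tmp/kubectl-logs-demo.txt
```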
Apr 3 14:40:00.991: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:40:01.079: INFO: namespace kubectl-4068 deletion completed in 22.105071629s

• [SLOW TEST:29.948 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:40:01.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0403 14:40:31.715559 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Apr 3 14:40:31.715: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:40:31.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9893" for this suite.
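The `deleteOptions.PropagationPolicy` named in the spec title is a field of the v1 `DeleteOptions` body sent with the DELETE request; setting it to `Orphan` tells the garbage collector to leave dependents (here, the Deployment's ReplicaSet) in place. A sketch of such a body, written and inspected purely locally (no cluster involved):

```shell
# Sketch of a v1 DeleteOptions body requesting orphaning of dependents.
# The file path is arbitrary; the JSON shape follows the v1 DeleteOptions type.
cat > /tmp/delete-options.json <<'EOF'
{
  "kind": "DeleteOptions",
  "apiVersion": "v1",
  "propagationPolicy": "Orphan"
}
EOF
grep -o '"propagationPolicy": "Orphan"' /tmp/delete-options.json
# prints: "propagationPolicy": "Orphan"
```

With kubectl of this vintage (v1.15), `kubectl delete deployment <name> --cascade=false` requests roughly the same behavior from the command line.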
Apr 3 14:40:37.757: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:40:37.823: INFO: namespace gc-9893 deletion completed in 6.1051863s

• [SLOW TEST:36.743 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] EmptyDir wrapper volumes
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:40:37.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:40:42.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1038" for this suite.
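The EmptyDir wrapper test above creates a secret and a configmap and mounts both into one pod (hence the "Cleaning up the secret/configmap/pod" steps), checking that the two wrapper-backed volumes coexist. A hypothetical manifest showing that shape; every name in it is invented, and the snippet only writes and inspects the file locally:

```shell
# Hypothetical pod manifest mounting a secret volume and a configMap volume
# side by side, the situation the wrapper-volume test exercises.
cat > /tmp/wrapper-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: wrapper-demo
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: secret-vol
      mountPath: /etc/secret
    - name: config-vol
      mountPath: /etc/config
  volumes:
  - name: secret-vol
    secret:
      secretName: demo-secret
  - name: config-vol
    configMap:
      name: demo-config
EOF
grep -c 'name: .*-vol$' /tmp/wrapper-pod.yaml   # prints: 4 (two mounts, two volumes)
```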
Apr 3 14:40:48.067: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:40:48.150: INFO: namespace emptydir-wrapper-1038 deletion completed in 6.099107013s

• [SLOW TEST:10.326 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:40:48.151: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Apr 3 14:40:52.752: INFO: Successfully updated pod "pod-update-9155b970-c472-443e-a6d1-7202bd5bf6cd"
STEP: verifying the updated pod is in kubernetes
Apr 3 14:40:52.773: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:40:52.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9087" for this suite.
Apr 3 14:41:14.808: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:41:14.878: INFO: namespace pods-9087 deletion completed in 22.078876834s

• [SLOW TEST:26.726 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:41:14.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Apr 3 14:41:14.912: INFO: Waiting up to 5m0s for pod "var-expansion-0d7b20a7-4d50-4717-bbc3-b7e0c152ccee" in namespace "var-expansion-841" to be "success or failure"
Apr 3 14:41:14.927: INFO: Pod "var-expansion-0d7b20a7-4d50-4717-bbc3-b7e0c152ccee": Phase="Pending", Reason="", readiness=false. Elapsed: 15.140073ms
Apr 3 14:41:16.931: INFO: Pod "var-expansion-0d7b20a7-4d50-4717-bbc3-b7e0c152ccee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019062387s
Apr 3 14:41:18.936: INFO: Pod "var-expansion-0d7b20a7-4d50-4717-bbc3-b7e0c152ccee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02356579s
STEP: Saw pod success
Apr 3 14:41:18.936: INFO: Pod "var-expansion-0d7b20a7-4d50-4717-bbc3-b7e0c152ccee" satisfied condition "success or failure"
Apr 3 14:41:18.939: INFO: Trying to get logs from node iruya-worker pod var-expansion-0d7b20a7-4d50-4717-bbc3-b7e0c152ccee container dapi-container:
STEP: delete the pod
Apr 3 14:41:19.005: INFO: Waiting for pod var-expansion-0d7b20a7-4d50-4717-bbc3-b7e0c152ccee to disappear
Apr 3 14:41:19.013: INFO: Pod var-expansion-0d7b20a7-4d50-4717-bbc3-b7e0c152ccee no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:41:19.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-841" for this suite.
Apr 3 14:41:25.029: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:41:25.115: INFO: namespace var-expansion-841 deletion completed in 6.099608915s

• [SLOW TEST:10.237 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:41:25.116: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-c77a54ec-c5e5-4dd2-a6d1-16fc246b9c1f
STEP: Creating a pod to test consume secrets
Apr 3 14:41:25.195: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ee392387-26be-4dcc-aa44-0587478905ff" in namespace "projected-3123" to be "success or failure"
Apr 3 14:41:25.199: INFO: Pod "pod-projected-secrets-ee392387-26be-4dcc-aa44-0587478905ff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.382554ms
Apr 3 14:41:27.203: INFO: Pod "pod-projected-secrets-ee392387-26be-4dcc-aa44-0587478905ff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008563706s
Apr 3 14:41:29.207: INFO: Pod "pod-projected-secrets-ee392387-26be-4dcc-aa44-0587478905ff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012429682s
STEP: Saw pod success
Apr 3 14:41:29.207: INFO: Pod "pod-projected-secrets-ee392387-26be-4dcc-aa44-0587478905ff" satisfied condition "success or failure"
Apr 3 14:41:29.210: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ee392387-26be-4dcc-aa44-0587478905ff container projected-secret-volume-test:
STEP: delete the pod
Apr 3 14:41:29.231: INFO: Waiting for pod pod-projected-secrets-ee392387-26be-4dcc-aa44-0587478905ff to disappear
Apr 3 14:41:29.236: INFO: Pod pod-projected-secrets-ee392387-26be-4dcc-aa44-0587478905ff no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:41:29.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3123" for this suite.
Apr 3 14:41:35.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:41:35.344: INFO: namespace projected-3123 deletion completed in 6.104546486s

• [SLOW TEST:10.228 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-network] DNS
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Apr 3 14:41:35.344: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5969.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-5969.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5969.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-5969.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-5969.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5969.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Apr 3 14:41:41.450: INFO: DNS probes using dns-5969/dns-test-1450f451-5c9e-4d1c-93e9-58a8f9da34b1 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Apr 3 14:41:41.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5969" for this suite.
Apr 3 14:41:47.544: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Apr 3 14:41:47.629: INFO: namespace dns-5969 deletion completed in 6.127403788s

• [SLOW TEST:12.285 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
Apr 3 14:41:47.629: INFO: Running AfterSuite actions on all nodes
Apr 3 14:41:47.629: INFO: Running AfterSuite actions on node 1
Apr 3 14:41:47.629: INFO: Skipping dumping logs from cluster

Ran 215 of 4412 Specs in 6365.407 seconds
SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped
PASS
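The probe loops injected by the DNS specs (both the /etc/hosts spec above and the earlier ExternalName spec) all follow one pattern: retry a lookup once per second and drop an `OK` marker file into `/results` as soon as the lookup returns output. A standalone rendition of that pattern with the lookup stubbed out, since `dig`/`getent` against cluster DNS is unavailable outside a pod; the `lookup` function, file names, and loop count here are invented for illustration:

```shell
# Standalone rendition of the e2e probe loop. `lookup` stands in for
# `dig +short <name> CNAME` or `getent hosts <name>` from the real probes.
results=/tmp/results
mkdir -p "$results"
lookup() { echo "bar.example.com."; }  # stub: a real probe would query DNS
for i in $(seq 1 3); do
  check="$(lookup)"
  # non-empty answer => record success, exactly as the real loops do
  test -n "$check" && echo OK > "$results/probe_udp@dns-test"
  # the real loops `sleep 1` between attempts; omitted so the sketch runs instantly
done
cat "$results/probe_udp@dns-test"   # prints: OK
```

In the real tests, the framework later fetches these marker files from the probe pod to decide whether each expected name resolved.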