I0517 12:55:47.012721 6 e2e.go:243] Starting e2e run "82956068-7451-4359-83b5-1c8de8d8a513" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1589720146 - Will randomize all specs
Will run 215 of 4412 specs

May 17 12:55:47.193: INFO: >>> kubeConfig: /root/.kube/config
May 17 12:55:47.196: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 17 12:55:47.219: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 17 12:55:47.245: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 17 12:55:47.245: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 17 12:55:47.245: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 17 12:55:47.254: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 17 12:55:47.254: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 17 12:55:47.254: INFO: e2e test version: v1.15.11
May 17 12:55:47.255: INFO: kube-apiserver version: v1.15.7
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:55:47.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
May 17 12:55:47.308: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 17 12:55:47.316: INFO: Waiting up to 5m0s for pod "pod-864aed87-fed2-4ce0-970b-a9c92eec6eaa" in namespace "emptydir-5543" to be "success or failure"
May 17 12:55:47.320: INFO: Pod "pod-864aed87-fed2-4ce0-970b-a9c92eec6eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 3.931667ms
May 17 12:55:49.612: INFO: Pod "pod-864aed87-fed2-4ce0-970b-a9c92eec6eaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.295419764s
May 17 12:55:51.616: INFO: Pod "pod-864aed87-fed2-4ce0-970b-a9c92eec6eaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2997786s
STEP: Saw pod success
May 17 12:55:51.616: INFO: Pod "pod-864aed87-fed2-4ce0-970b-a9c92eec6eaa" satisfied condition "success or failure"
May 17 12:55:51.619: INFO: Trying to get logs from node iruya-worker2 pod pod-864aed87-fed2-4ce0-970b-a9c92eec6eaa container test-container:
STEP: delete the pod
May 17 12:55:51.700: INFO: Waiting for pod pod-864aed87-fed2-4ce0-970b-a9c92eec6eaa to disappear
May 17 12:55:51.705: INFO: Pod pod-864aed87-fed2-4ce0-970b-a9c92eec6eaa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:55:51.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5543" for this suite.
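The log above only records the pod's phase transitions; the manifest the framework submits is never printed. A minimal sketch of a pod that exercises the same behavior, a tmpfs-backed emptyDir mounted and checked with mode 0777, might look like the following config fragment. The name, image, and command are illustrative assumptions, not values from this run.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example       # hypothetical name, not from this run
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                   # illustrative image
    command: ["sh", "-c", "chmod 0777 /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory                 # medium: Memory backs the emptyDir with tmpfs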
May 17 12:55:57.720: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:55:57.814: INFO: namespace emptydir-5543 deletion completed in 6.105505742s

• [SLOW TEST:10.559 seconds]
[sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:55:57.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 17 12:56:02.447: INFO: Successfully updated pod "pod-update-e4eb2e13-96b5-46cd-b7f2-0557fb2d594f"
STEP: verifying the updated pod is in kubernetes
May 17 12:56:02.456: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:56:02.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2284" for this suite.
May 17 12:56:24.677: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:56:24.755: INFO: namespace pods-2284 deletion completed in 22.295403898s

• [SLOW TEST:26.941 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:56:24.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-4.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 17 12:56:32.935: INFO: DNS probes using dns-4/dns-test-616b58d6-82ff-4202-a34e-eb73413d1b72 succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:56:32.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4" for this suite.
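The probe scripts logged above derive each pod's DNS A-record name from its IP with `awk`: the dots become dashes and the probe namespace's pod zone is appended. A standalone sketch of just that transformation, with a made-up example IP standing in for `hostname -i` and the `$$` template escaping from the logged command reduced to plain `$`:

```shell
# Derive a pod A-record name the way the logged probe commands do:
# dots in the pod IP become dashes, then the pod zone of the "dns-4"
# namespace is appended. The IP below is an illustrative example; the
# real probe uses `hostname -i`.
pod_ip="10.244.1.5"
pod_a_record=$(echo "$pod_ip" | awk -F. '{print $1"-"$2"-"$3"-"$4".dns-4.pod.cluster.local"}')
echo "$pod_a_record"
```

Resolving that name over both UDP and TCP (e.g. `dig +search "$pod_a_record" A`) is what the wheezy and jessie loops above check once per second, writing `OK` marker files that the prober then reads back.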
May 17 12:56:39.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:56:39.117: INFO: namespace dns-4 deletion completed in 6.141727333s

• [SLOW TEST:14.361 seconds]
[sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:56:39.117: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 12:56:39.164: INFO: Creating ReplicaSet my-hostname-basic-d4e48306-2387-45f5-b169-858025feeb2a
May 17 12:56:39.173: INFO: Pod name my-hostname-basic-d4e48306-2387-45f5-b169-858025feeb2a: Found 0 pods out of 1
May 17 12:56:44.178: INFO: Pod name my-hostname-basic-d4e48306-2387-45f5-b169-858025feeb2a: Found 1 pods out of 1
May 17 12:56:44.178: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d4e48306-2387-45f5-b169-858025feeb2a" is running
May 17 12:56:44.180: INFO: Pod "my-hostname-basic-d4e48306-2387-45f5-b169-858025feeb2a-2q9c9" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 12:56:39 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 12:56:42 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 12:56:42 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 12:56:39 +0000 UTC Reason: Message:}])
May 17 12:56:44.181: INFO: Trying to dial the pod
May 17 12:56:49.193: INFO: Controller my-hostname-basic-d4e48306-2387-45f5-b169-858025feeb2a: Got expected result from replica 1 [my-hostname-basic-d4e48306-2387-45f5-b169-858025feeb2a-2q9c9]: "my-hostname-basic-d4e48306-2387-45f5-b169-858025feeb2a-2q9c9", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:56:49.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6750" for this suite.
May 17 12:56:55.216: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:56:55.287: INFO: namespace replicaset-6750 deletion completed in 6.090912901s

• [SLOW TEST:16.171 seconds]
[sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:56:55.288: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 12:56:55.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:56:59.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3838" for this suite.
May 17 12:57:45.547: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:57:45.665: INFO: namespace pods-3838 deletion completed in 46.145023689s

• [SLOW TEST:50.377 seconds]
[k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:57:45.665: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-c4a2a9da-3944-426a-afc2-b4928d552b6e
STEP: Creating a pod to test consume secrets
May 17 12:57:45.746: INFO: Waiting up to 5m0s for pod "pod-secrets-f1c644fd-0882-4703-84ce-064bb25c777e" in namespace "secrets-4024" to be "success or failure"
May 17 12:57:45.749: INFO: Pod "pod-secrets-f1c644fd-0882-4703-84ce-064bb25c777e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.083623ms
May 17 12:57:47.753: INFO: Pod "pod-secrets-f1c644fd-0882-4703-84ce-064bb25c777e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007589878s
May 17 12:57:49.758: INFO: Pod "pod-secrets-f1c644fd-0882-4703-84ce-064bb25c777e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012243483s
STEP: Saw pod success
May 17 12:57:49.758: INFO: Pod "pod-secrets-f1c644fd-0882-4703-84ce-064bb25c777e" satisfied condition "success or failure"
May 17 12:57:49.761: INFO: Trying to get logs from node iruya-worker pod pod-secrets-f1c644fd-0882-4703-84ce-064bb25c777e container secret-volume-test:
STEP: delete the pod
May 17 12:57:49.799: INFO: Waiting for pod pod-secrets-f1c644fd-0882-4703-84ce-064bb25c777e to disappear
May 17 12:57:49.815: INFO: Pod pod-secrets-f1c644fd-0882-4703-84ce-064bb25c777e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:57:49.815: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4024" for this suite.
May 17 12:57:55.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:57:55.913: INFO: namespace secrets-4024 deletion completed in 6.0951347s

• [SLOW TEST:10.248 seconds]
[sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:57:55.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-3c7fbc2c-4eac-4c66-b669-4f30c695563f
STEP: Creating a pod to test consume configMaps
May 17 12:57:55.978: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce" in namespace "projected-5346" to be "success or failure"
May 17 12:57:55.991: INFO: Pod "pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce": Phase="Pending", Reason="", readiness=false. Elapsed: 12.685811ms
May 17 12:57:57.996: INFO: Pod "pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017753228s
May 17 12:58:00.000: INFO: Pod "pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce": Phase="Running", Reason="", readiness=true. Elapsed: 4.021985489s
May 17 12:58:02.005: INFO: Pod "pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026291702s
STEP: Saw pod success
May 17 12:58:02.005: INFO: Pod "pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce" satisfied condition "success or failure"
May 17 12:58:02.008: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce container projected-configmap-volume-test:
STEP: delete the pod
May 17 12:58:02.119: INFO: Waiting for pod pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce to disappear
May 17 12:58:02.123: INFO: Pod pod-projected-configmaps-5fe98876-14cf-498d-b1a7-43deddf891ce no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:58:02.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5346" for this suite.
May 17 12:58:08.142: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:58:08.223: INFO: namespace projected-5346 deletion completed in 6.096288918s

• [SLOW TEST:12.310 seconds]
[sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:58:08.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-046039bf-6fff-4f64-b9e8-2f9739d52395 in namespace container-probe-9462
May 17 12:58:12.298: INFO: Started pod liveness-046039bf-6fff-4f64-b9e8-2f9739d52395 in namespace container-probe-9462
STEP: checking the pod's current state and verifying that restartCount is present
May 17 12:58:12.300: INFO: Initial restart count of pod liveness-046039bf-6fff-4f64-b9e8-2f9739d52395 is 0
May 17 12:58:32.348: INFO: Restart count of pod container-probe-9462/liveness-046039bf-6fff-4f64-b9e8-2f9739d52395 is now 1 (20.04775571s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:58:32.386: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9462" for this suite.
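Entries such as "Waiting up to 5m0s for pod … to be "success or failure"" and the restart-count check above all follow the same poll-until-timeout pattern, logging the elapsed time when the condition flips. A generic shell sketch of that pattern; the condition command here is a stand-in, since in the framework it would be an API query (for example, reading the pod's restartCount):

```shell
# Poll a condition command once per second until it succeeds or a
# timeout (in seconds) elapses -- the same wait-and-report pattern the
# framework logs as "Waiting up to ... (Ns elapsed)".
wait_for() {
  timeout=$1; shift
  elapsed=0
  until "$@"; do
    if [ "$elapsed" -ge "$timeout" ]; then
      echo "timed out after ${elapsed}s"
      return 1
    fi
    sleep 1
    elapsed=$((elapsed + 1))
  done
  echo "condition met after ${elapsed}s"
}

# Stand-in condition: `true` is satisfied on the first check.
wait_for 5 true
```

In the real suite the condition would be something like "restartCount of the liveness pod is greater than its initial value", checked against the API server rather than a local command.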
May 17 12:58:38.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:58:38.525: INFO: namespace container-probe-9462 deletion completed in 6.121885101s

• [SLOW TEST:30.301 seconds]
[k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:58:38.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 12:58:38.607: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
May 17 12:58:38.646: INFO: Pod name sample-pod: Found 0 pods out of 1
May 17 12:58:43.651: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
May 17 12:58:43.651: INFO: Creating deployment "test-rolling-update-deployment"
May 17 12:58:43.655: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
May 17 12:58:43.664: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
May 17 12:58:45.806: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
May 17 12:58:45.808: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725317123, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725317123, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725317123, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725317123, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 17 12:58:47.812: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 17 12:58:47.820: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3919,SelfLink:/apis/apps/v1/namespaces/deployment-3919/deployments/test-rolling-update-deployment,UID:9abe47eb-885a-406c-8103-706b1d2e103d,ResourceVersion:11390472,Generation:1,CreationTimestamp:2020-05-17 12:58:43 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-17 12:58:43 +0000 UTC 2020-05-17 12:58:43 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-17 12:58:47 +0000 UTC 2020-05-17 12:58:43 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}
May 17 12:58:47.824: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3919,SelfLink:/apis/apps/v1/namespaces/deployment-3919/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:3d9bfd67-0272-4847-bb89-738265a80539,ResourceVersion:11390461,Generation:1,CreationTimestamp:2020-05-17 12:58:43 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9abe47eb-885a-406c-8103-706b1d2e103d 0xc0026a6667 0xc0026a6668}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
May 17 12:58:47.824: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
May 17 12:58:47.824: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3919,SelfLink:/apis/apps/v1/namespaces/deployment-3919/replicasets/test-rolling-update-controller,UID:1aeee526-d676-4f57-8bbb-9a865114a432,ResourceVersion:11390470,Generation:2,CreationTimestamp:2020-05-17 12:58:38 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 9abe47eb-885a-406c-8103-706b1d2e103d 0xc0026a6597 0xc0026a6598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
May 17 12:58:47.827: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-28qzw" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-28qzw,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3919,SelfLink:/api/v1/namespaces/deployment-3919/pods/test-rolling-update-deployment-79f6b9d75c-28qzw,UID:c574d1c8-2123-408a-be71-e0895dc58f36,ResourceVersion:11390460,Generation:0,CreationTimestamp:2020-05-17 12:58:43 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 3d9bfd67-0272-4847-bb89-738265a80539 0xc0026a6f67 0xc0026a6f68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-cp9jh {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-cp9jh,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-cp9jh true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0026a6fe0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0026a7000}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 12:58:43 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 12:58:47 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 12:58:47 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 12:58:43 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.131,StartTime:2020-05-17 12:58:43 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-17 12:58:46 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://a24a3ffcce076924ca46c31c8811ec2516c2e432d854000bd7e69dcbb857c2f5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 12:58:47.827: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "deployment-3919" for this suite. May 17 12:58:53.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 12:58:54.042: INFO: namespace deployment-3919 deletion completed in 6.211437777s • [SLOW TEST:15.516 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 12:58:54.042: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on node default medium May 17 12:58:54.115: INFO: Waiting up to 5m0s for pod "pod-0f0d267e-9cb3-4d39-bb85-61db3b32dea4" in namespace "emptydir-1071" to be "success or failure" May 17 12:58:54.135: INFO: Pod "pod-0f0d267e-9cb3-4d39-bb85-61db3b32dea4": Phase="Pending", Reason="", readiness=false. Elapsed: 19.276256ms May 17 12:58:56.139: INFO: Pod "pod-0f0d267e-9cb3-4d39-bb85-61db3b32dea4": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.02349256s May 17 12:58:58.144: INFO: Pod "pod-0f0d267e-9cb3-4d39-bb85-61db3b32dea4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028266636s STEP: Saw pod success May 17 12:58:58.144: INFO: Pod "pod-0f0d267e-9cb3-4d39-bb85-61db3b32dea4" satisfied condition "success or failure" May 17 12:58:58.147: INFO: Trying to get logs from node iruya-worker2 pod pod-0f0d267e-9cb3-4d39-bb85-61db3b32dea4 container test-container: STEP: delete the pod May 17 12:58:58.179: INFO: Waiting for pod pod-0f0d267e-9cb3-4d39-bb85-61db3b32dea4 to disappear May 17 12:58:58.187: INFO: Pod pod-0f0d267e-9cb3-4d39-bb85-61db3b32dea4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 12:58:58.187: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1071" for this suite. May 17 12:59:04.227: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 12:59:04.299: INFO: namespace emptydir-1071 deletion completed in 6.089174435s • [SLOW TEST:10.257 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 12:59:04.300: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating replication controller svc-latency-rc in namespace svc-latency-1271 I0517 12:59:04.403660 6 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-1271, replica count: 1 I0517 12:59:05.454128 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 12:59:06.454314 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 12:59:07.454493 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0517 12:59:08.454687 6 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 17 12:59:08.602: INFO: Created: latency-svc-52s6h May 17 12:59:08.613: INFO: Got endpoints: latency-svc-52s6h [58.103344ms] May 17 12:59:08.712: INFO: Created: latency-svc-h2q94 May 17 12:59:08.715: INFO: Got endpoints: latency-svc-h2q94 [102.048298ms] May 17 12:59:08.786: INFO: Created: latency-svc-5zxjl May 17 12:59:08.806: INFO: Got endpoints: latency-svc-5zxjl [193.079392ms] May 17 12:59:08.886: INFO: Created: latency-svc-k29sw May 17 12:59:08.895: INFO: Got endpoints: latency-svc-k29sw [282.332716ms] May 17 12:59:08.943: INFO: Created: latency-svc-rcftg May 17 12:59:08.967: INFO: Got endpoints: latency-svc-rcftg [354.554855ms] May 17 12:59:09.030: INFO: Created: latency-svc-pfphq May 17 12:59:09.033: INFO: Got endpoints: latency-svc-pfphq [420.597382ms] May 17 12:59:09.062: INFO: 
Created: latency-svc-v4kn4 May 17 12:59:09.088: INFO: Got endpoints: latency-svc-v4kn4 [475.311718ms] May 17 12:59:09.110: INFO: Created: latency-svc-c6nbj May 17 12:59:09.197: INFO: Got endpoints: latency-svc-c6nbj [583.771006ms] May 17 12:59:09.200: INFO: Created: latency-svc-qcrqq May 17 12:59:09.209: INFO: Got endpoints: latency-svc-qcrqq [596.038752ms] May 17 12:59:09.230: INFO: Created: latency-svc-lxnxd May 17 12:59:09.244: INFO: Got endpoints: latency-svc-lxnxd [631.204256ms] May 17 12:59:09.266: INFO: Created: latency-svc-cplkc May 17 12:59:09.274: INFO: Got endpoints: latency-svc-cplkc [661.409769ms] May 17 12:59:09.335: INFO: Created: latency-svc-26djn May 17 12:59:09.338: INFO: Got endpoints: latency-svc-26djn [724.785927ms] May 17 12:59:09.375: INFO: Created: latency-svc-6jxnk May 17 12:59:09.389: INFO: Got endpoints: latency-svc-6jxnk [776.079028ms] May 17 12:59:09.418: INFO: Created: latency-svc-tcj9n May 17 12:59:09.431: INFO: Got endpoints: latency-svc-tcj9n [817.875154ms] May 17 12:59:09.505: INFO: Created: latency-svc-n6m2z May 17 12:59:09.506: INFO: Got endpoints: latency-svc-n6m2z [893.159646ms] May 17 12:59:09.550: INFO: Created: latency-svc-mhg9l May 17 12:59:09.578: INFO: Got endpoints: latency-svc-mhg9l [965.376451ms] May 17 12:59:09.653: INFO: Created: latency-svc-cf9zd May 17 12:59:09.656: INFO: Got endpoints: latency-svc-cf9zd [940.64564ms] May 17 12:59:09.685: INFO: Created: latency-svc-c4fpb May 17 12:59:09.702: INFO: Got endpoints: latency-svc-c4fpb [896.391014ms] May 17 12:59:09.740: INFO: Created: latency-svc-rgzgj May 17 12:59:09.750: INFO: Got endpoints: latency-svc-rgzgj [855.234274ms] May 17 12:59:09.789: INFO: Created: latency-svc-jlngh May 17 12:59:09.805: INFO: Got endpoints: latency-svc-jlngh [837.280756ms] May 17 12:59:09.824: INFO: Created: latency-svc-bnns9 May 17 12:59:09.841: INFO: Got endpoints: latency-svc-bnns9 [807.838204ms] May 17 12:59:09.860: INFO: Created: latency-svc-7c2l6 May 17 12:59:09.872: INFO: Got 
endpoints: latency-svc-7c2l6 [783.243852ms] May 17 12:59:09.922: INFO: Created: latency-svc-tjfw2 May 17 12:59:09.926: INFO: Got endpoints: latency-svc-tjfw2 [728.762129ms] May 17 12:59:09.986: INFO: Created: latency-svc-9b2qp May 17 12:59:10.010: INFO: Got endpoints: latency-svc-9b2qp [801.378357ms] May 17 12:59:10.067: INFO: Created: latency-svc-d8spj May 17 12:59:10.068: INFO: Got endpoints: latency-svc-d8spj [824.123681ms] May 17 12:59:10.094: INFO: Created: latency-svc-4qz28 May 17 12:59:10.119: INFO: Got endpoints: latency-svc-4qz28 [844.484314ms] May 17 12:59:10.160: INFO: Created: latency-svc-5t8xz May 17 12:59:10.191: INFO: Got endpoints: latency-svc-5t8xz [853.179901ms] May 17 12:59:10.208: INFO: Created: latency-svc-7gsb2 May 17 12:59:10.221: INFO: Got endpoints: latency-svc-7gsb2 [831.933856ms] May 17 12:59:10.244: INFO: Created: latency-svc-6ct5f May 17 12:59:10.257: INFO: Got endpoints: latency-svc-6ct5f [826.423057ms] May 17 12:59:10.286: INFO: Created: latency-svc-rf8n8 May 17 12:59:10.323: INFO: Got endpoints: latency-svc-rf8n8 [816.927376ms] May 17 12:59:10.339: INFO: Created: latency-svc-c59c6 May 17 12:59:10.370: INFO: Got endpoints: latency-svc-c59c6 [791.50195ms] May 17 12:59:10.412: INFO: Created: latency-svc-5htg6 May 17 12:59:10.448: INFO: Got endpoints: latency-svc-5htg6 [792.619352ms] May 17 12:59:10.472: INFO: Created: latency-svc-h7h4s May 17 12:59:10.505: INFO: Got endpoints: latency-svc-h7h4s [802.050209ms] May 17 12:59:10.545: INFO: Created: latency-svc-tqfgn May 17 12:59:10.616: INFO: Got endpoints: latency-svc-tqfgn [865.735285ms] May 17 12:59:10.619: INFO: Created: latency-svc-zt888 May 17 12:59:10.643: INFO: Got endpoints: latency-svc-zt888 [837.948637ms] May 17 12:59:10.713: INFO: Created: latency-svc-6b66h May 17 12:59:10.790: INFO: Got endpoints: latency-svc-6b66h [948.819978ms] May 17 12:59:10.793: INFO: Created: latency-svc-pfsn2 May 17 12:59:10.799: INFO: Got endpoints: latency-svc-pfsn2 [926.863717ms] May 17 12:59:10.838: 
INFO: Created: latency-svc-rjgcs May 17 12:59:10.875: INFO: Got endpoints: latency-svc-rjgcs [949.236514ms] May 17 12:59:10.941: INFO: Created: latency-svc-fnhkv May 17 12:59:10.962: INFO: Got endpoints: latency-svc-fnhkv [951.114497ms] May 17 12:59:11.000: INFO: Created: latency-svc-kg84j May 17 12:59:11.059: INFO: Got endpoints: latency-svc-kg84j [990.860139ms] May 17 12:59:11.062: INFO: Created: latency-svc-mc257 May 17 12:59:11.075: INFO: Got endpoints: latency-svc-mc257 [956.435841ms] May 17 12:59:11.096: INFO: Created: latency-svc-p2f7d May 17 12:59:11.112: INFO: Got endpoints: latency-svc-p2f7d [920.75134ms] May 17 12:59:11.134: INFO: Created: latency-svc-gfh7v May 17 12:59:11.148: INFO: Got endpoints: latency-svc-gfh7v [926.857484ms] May 17 12:59:11.198: INFO: Created: latency-svc-tkvzh May 17 12:59:11.202: INFO: Got endpoints: latency-svc-tkvzh [944.563833ms] May 17 12:59:11.234: INFO: Created: latency-svc-2vl4t May 17 12:59:11.263: INFO: Got endpoints: latency-svc-2vl4t [940.156307ms] May 17 12:59:11.293: INFO: Created: latency-svc-f8pcb May 17 12:59:11.353: INFO: Got endpoints: latency-svc-f8pcb [983.199348ms] May 17 12:59:11.355: INFO: Created: latency-svc-jfztl May 17 12:59:11.359: INFO: Got endpoints: latency-svc-jfztl [96.001174ms] May 17 12:59:11.384: INFO: Created: latency-svc-fvqzt May 17 12:59:11.402: INFO: Got endpoints: latency-svc-fvqzt [953.673644ms] May 17 12:59:11.419: INFO: Created: latency-svc-7rczf May 17 12:59:11.432: INFO: Got endpoints: latency-svc-7rczf [926.777617ms] May 17 12:59:11.491: INFO: Created: latency-svc-lq4qv May 17 12:59:11.494: INFO: Got endpoints: latency-svc-lq4qv [877.541122ms] May 17 12:59:11.522: INFO: Created: latency-svc-lnsk8 May 17 12:59:11.534: INFO: Got endpoints: latency-svc-lnsk8 [890.935434ms] May 17 12:59:11.553: INFO: Created: latency-svc-stnpn May 17 12:59:11.582: INFO: Got endpoints: latency-svc-stnpn [791.973109ms] May 17 12:59:11.640: INFO: Created: latency-svc-v2q2h May 17 12:59:11.655: INFO: Got 
endpoints: latency-svc-v2q2h [856.048903ms] May 17 12:59:11.677: INFO: Created: latency-svc-4dtcx May 17 12:59:11.691: INFO: Got endpoints: latency-svc-4dtcx [816.306622ms] May 17 12:59:11.713: INFO: Created: latency-svc-n4kq9 May 17 12:59:11.727: INFO: Got endpoints: latency-svc-n4kq9 [765.322398ms] May 17 12:59:11.780: INFO: Created: latency-svc-b5r9q May 17 12:59:11.793: INFO: Got endpoints: latency-svc-b5r9q [733.981021ms] May 17 12:59:11.817: INFO: Created: latency-svc-xrshv May 17 12:59:11.829: INFO: Got endpoints: latency-svc-xrshv [753.901174ms] May 17 12:59:11.852: INFO: Created: latency-svc-7frth May 17 12:59:11.866: INFO: Got endpoints: latency-svc-7frth [753.786166ms] May 17 12:59:11.923: INFO: Created: latency-svc-znj2n May 17 12:59:11.926: INFO: Got endpoints: latency-svc-znj2n [777.837465ms] May 17 12:59:11.989: INFO: Created: latency-svc-mwpsq May 17 12:59:12.020: INFO: Got endpoints: latency-svc-mwpsq [817.779563ms] May 17 12:59:12.084: INFO: Created: latency-svc-l9d26 May 17 12:59:12.087: INFO: Got endpoints: latency-svc-l9d26 [733.618284ms] May 17 12:59:12.115: INFO: Created: latency-svc-6tdsk May 17 12:59:12.132: INFO: Got endpoints: latency-svc-6tdsk [772.398633ms] May 17 12:59:12.164: INFO: Created: latency-svc-9tg4x May 17 12:59:12.180: INFO: Got endpoints: latency-svc-9tg4x [777.728605ms] May 17 12:59:12.222: INFO: Created: latency-svc-47wsw May 17 12:59:12.225: INFO: Got endpoints: latency-svc-47wsw [792.881259ms] May 17 12:59:12.248: INFO: Created: latency-svc-h78bk May 17 12:59:12.270: INFO: Got endpoints: latency-svc-h78bk [776.442971ms] May 17 12:59:12.290: INFO: Created: latency-svc-xp9x9 May 17 12:59:12.312: INFO: Got endpoints: latency-svc-xp9x9 [778.486761ms] May 17 12:59:12.389: INFO: Created: latency-svc-nwtgf May 17 12:59:12.392: INFO: Got endpoints: latency-svc-nwtgf [810.21834ms] May 17 12:59:12.423: INFO: Created: latency-svc-s94f7 May 17 12:59:12.438: INFO: Got endpoints: latency-svc-s94f7 [783.808892ms] May 17 12:59:12.459: 
INFO: Created: latency-svc-98gnc May 17 12:59:12.469: INFO: Got endpoints: latency-svc-98gnc [777.525442ms] May 17 12:59:12.488: INFO: Created: latency-svc-gzvtp May 17 12:59:12.533: INFO: Got endpoints: latency-svc-gzvtp [805.913746ms] May 17 12:59:12.541: INFO: Created: latency-svc-kq8vp May 17 12:59:12.559: INFO: Got endpoints: latency-svc-kq8vp [766.019455ms] May 17 12:59:12.595: INFO: Created: latency-svc-8zqng May 17 12:59:12.608: INFO: Got endpoints: latency-svc-8zqng [778.097418ms] May 17 12:59:12.683: INFO: Created: latency-svc-hdxf7 May 17 12:59:12.716: INFO: Got endpoints: latency-svc-hdxf7 [850.67872ms] May 17 12:59:12.781: INFO: Created: latency-svc-pldb5 May 17 12:59:12.844: INFO: Got endpoints: latency-svc-pldb5 [917.851532ms] May 17 12:59:12.903: INFO: Created: latency-svc-c4fbd May 17 12:59:12.927: INFO: Got endpoints: latency-svc-c4fbd [906.679026ms] May 17 12:59:13.000: INFO: Created: latency-svc-nn9c2 May 17 12:59:13.004: INFO: Got endpoints: latency-svc-nn9c2 [916.612513ms] May 17 12:59:13.064: INFO: Created: latency-svc-fzb4t May 17 12:59:13.089: INFO: Got endpoints: latency-svc-fzb4t [957.276884ms] May 17 12:59:13.136: INFO: Created: latency-svc-sg8jr May 17 12:59:13.154: INFO: Got endpoints: latency-svc-sg8jr [974.491617ms] May 17 12:59:13.177: INFO: Created: latency-svc-pf7tz May 17 12:59:13.191: INFO: Got endpoints: latency-svc-pf7tz [965.454468ms] May 17 12:59:13.216: INFO: Created: latency-svc-tnx8c May 17 12:59:13.269: INFO: Got endpoints: latency-svc-tnx8c [998.451316ms] May 17 12:59:13.286: INFO: Created: latency-svc-mgzb8 May 17 12:59:13.299: INFO: Got endpoints: latency-svc-mgzb8 [986.446773ms] May 17 12:59:13.323: INFO: Created: latency-svc-x6gf9 May 17 12:59:13.347: INFO: Got endpoints: latency-svc-x6gf9 [954.728306ms] May 17 12:59:13.434: INFO: Created: latency-svc-fwmmg May 17 12:59:13.443: INFO: Got endpoints: latency-svc-fwmmg [1.004436041s] May 17 12:59:13.471: INFO: Created: latency-svc-h9qw8 May 17 12:59:13.480: INFO: Got 
endpoints: latency-svc-h9qw8 [1.0106852s] May 17 12:59:13.503: INFO: Created: latency-svc-9s7k2 May 17 12:59:13.516: INFO: Got endpoints: latency-svc-9s7k2 [982.628312ms] May 17 12:59:13.577: INFO: Created: latency-svc-m6ncv May 17 12:59:13.580: INFO: Got endpoints: latency-svc-m6ncv [1.020896138s] May 17 12:59:13.610: INFO: Created: latency-svc-cxc28 May 17 12:59:13.624: INFO: Got endpoints: latency-svc-cxc28 [1.016756432s] May 17 12:59:13.645: INFO: Created: latency-svc-sv68f May 17 12:59:13.661: INFO: Got endpoints: latency-svc-sv68f [945.073021ms] May 17 12:59:13.730: INFO: Created: latency-svc-kwzwz May 17 12:59:13.733: INFO: Got endpoints: latency-svc-kwzwz [889.398303ms] May 17 12:59:13.790: INFO: Created: latency-svc-52gcn May 17 12:59:13.805: INFO: Got endpoints: latency-svc-52gcn [878.686748ms] May 17 12:59:13.827: INFO: Created: latency-svc-5gvk2 May 17 12:59:13.880: INFO: Got endpoints: latency-svc-5gvk2 [876.003949ms] May 17 12:59:13.897: INFO: Created: latency-svc-lkjcr May 17 12:59:13.920: INFO: Got endpoints: latency-svc-lkjcr [831.440333ms] May 17 12:59:13.952: INFO: Created: latency-svc-g6cvj May 17 12:59:14.023: INFO: Got endpoints: latency-svc-g6cvj [869.062814ms] May 17 12:59:14.048: INFO: Created: latency-svc-pb9pt May 17 12:59:14.078: INFO: Got endpoints: latency-svc-pb9pt [886.917051ms] May 17 12:59:14.112: INFO: Created: latency-svc-jkldr May 17 12:59:14.179: INFO: Got endpoints: latency-svc-jkldr [910.094735ms] May 17 12:59:14.199: INFO: Created: latency-svc-ddf4v May 17 12:59:14.214: INFO: Got endpoints: latency-svc-ddf4v [915.545798ms] May 17 12:59:14.234: INFO: Created: latency-svc-zjwwj May 17 12:59:14.252: INFO: Got endpoints: latency-svc-zjwwj [904.613373ms] May 17 12:59:14.271: INFO: Created: latency-svc-94b7r May 17 12:59:14.316: INFO: Got endpoints: latency-svc-94b7r [873.411164ms] May 17 12:59:14.328: INFO: Created: latency-svc-rtn6r May 17 12:59:14.348: INFO: Got endpoints: latency-svc-rtn6r [868.561871ms] May 17 12:59:14.654: 
INFO: Created: latency-svc-crcgd May 17 12:59:14.659: INFO: Got endpoints: latency-svc-crcgd [1.143151481s] May 17 12:59:14.725: INFO: Created: latency-svc-5zkj2 May 17 12:59:14.749: INFO: Got endpoints: latency-svc-5zkj2 [1.169063321s] May 17 12:59:14.833: INFO: Created: latency-svc-5kxt6 May 17 12:59:14.839: INFO: Got endpoints: latency-svc-5kxt6 [1.214348222s] May 17 12:59:14.862: INFO: Created: latency-svc-4v5v4 May 17 12:59:14.888: INFO: Got endpoints: latency-svc-4v5v4 [1.226435298s] May 17 12:59:14.994: INFO: Created: latency-svc-wsxwx May 17 12:59:15.002: INFO: Got endpoints: latency-svc-wsxwx [1.268743604s] May 17 12:59:15.049: INFO: Created: latency-svc-spc8n May 17 12:59:15.074: INFO: Got endpoints: latency-svc-spc8n [1.26851457s] May 17 12:59:15.149: INFO: Created: latency-svc-8sdmp May 17 12:59:15.158: INFO: Got endpoints: latency-svc-8sdmp [1.278334284s] May 17 12:59:15.183: INFO: Created: latency-svc-qxp4m May 17 12:59:15.196: INFO: Got endpoints: latency-svc-qxp4m [1.275269376s] May 17 12:59:15.224: INFO: Created: latency-svc-65lht May 17 12:59:15.239: INFO: Got endpoints: latency-svc-65lht [1.215172705s] May 17 12:59:15.287: INFO: Created: latency-svc-mpmbm May 17 12:59:15.307: INFO: Got endpoints: latency-svc-mpmbm [1.229260042s] May 17 12:59:15.336: INFO: Created: latency-svc-6s48q May 17 12:59:15.361: INFO: Got endpoints: latency-svc-6s48q [1.18154559s] May 17 12:59:15.419: INFO: Created: latency-svc-jbqh2 May 17 12:59:15.427: INFO: Got endpoints: latency-svc-jbqh2 [1.212468391s] May 17 12:59:15.458: INFO: Created: latency-svc-gmzlc May 17 12:59:15.488: INFO: Got endpoints: latency-svc-gmzlc [1.235599495s] May 17 12:59:15.518: INFO: Created: latency-svc-zp7vq May 17 12:59:15.562: INFO: Got endpoints: latency-svc-zp7vq [1.245852559s] May 17 12:59:15.576: INFO: Created: latency-svc-lbcnq May 17 12:59:15.593: INFO: Got endpoints: latency-svc-lbcnq [1.24519678s] May 17 12:59:15.612: INFO: Created: latency-svc-ln8zd May 17 12:59:15.630: INFO: Got 
endpoints: latency-svc-ln8zd [970.562016ms] May 17 12:59:15.730: INFO: Created: latency-svc-vsf8b May 17 12:59:15.735: INFO: Got endpoints: latency-svc-vsf8b [985.427646ms] May 17 12:59:15.770: INFO: Created: latency-svc-5s5fk May 17 12:59:15.786: INFO: Got endpoints: latency-svc-5s5fk [947.296443ms] May 17 12:59:15.804: INFO: Created: latency-svc-l6br6 May 17 12:59:15.816: INFO: Got endpoints: latency-svc-l6br6 [928.076534ms] May 17 12:59:15.905: INFO: Created: latency-svc-j65fm May 17 12:59:15.907: INFO: Got endpoints: latency-svc-j65fm [905.053202ms] May 17 12:59:15.973: INFO: Created: latency-svc-qzx9x May 17 12:59:15.991: INFO: Got endpoints: latency-svc-qzx9x [916.95512ms] May 17 12:59:16.047: INFO: Created: latency-svc-7gpvx May 17 12:59:16.051: INFO: Got endpoints: latency-svc-7gpvx [892.714662ms] May 17 12:59:16.080: INFO: Created: latency-svc-9flfj May 17 12:59:16.179: INFO: Got endpoints: latency-svc-9flfj [982.962266ms] May 17 12:59:16.189: INFO: Created: latency-svc-qdtvk May 17 12:59:16.218: INFO: Got endpoints: latency-svc-qdtvk [979.268321ms] May 17 12:59:16.256: INFO: Created: latency-svc-zr4ml May 17 12:59:16.334: INFO: Got endpoints: latency-svc-zr4ml [1.027285192s] May 17 12:59:16.346: INFO: Created: latency-svc-26bhz May 17 12:59:16.366: INFO: Got endpoints: latency-svc-26bhz [1.004665234s] May 17 12:59:16.386: INFO: Created: latency-svc-8sgzs May 17 12:59:16.396: INFO: Got endpoints: latency-svc-8sgzs [968.970564ms] May 17 12:59:16.416: INFO: Created: latency-svc-ldhgd May 17 12:59:16.426: INFO: Got endpoints: latency-svc-ldhgd [938.626444ms] May 17 12:59:16.479: INFO: Created: latency-svc-bkc7n May 17 12:59:16.486: INFO: Got endpoints: latency-svc-bkc7n [924.10962ms] May 17 12:59:16.508: INFO: Created: latency-svc-qf2r9 May 17 12:59:16.523: INFO: Got endpoints: latency-svc-qf2r9 [929.30429ms] May 17 12:59:16.542: INFO: Created: latency-svc-hk4wz May 17 12:59:16.559: INFO: Got endpoints: latency-svc-hk4wz [929.40243ms] May 17 12:59:16.579: 
INFO: Created: latency-svc-j25c5 May 17 12:59:16.635: INFO: Got endpoints: latency-svc-j25c5 [899.785022ms] May 17 12:59:16.636: INFO: Created: latency-svc-7nkts May 17 12:59:16.643: INFO: Got endpoints: latency-svc-7nkts [857.15091ms] May 17 12:59:16.670: INFO: Created: latency-svc-mjv69 May 17 12:59:16.686: INFO: Got endpoints: latency-svc-mjv69 [870.358263ms] May 17 12:59:16.808: INFO: Created: latency-svc-lvlmt May 17 12:59:16.813: INFO: Got endpoints: latency-svc-lvlmt [906.33297ms] May 17 12:59:16.849: INFO: Created: latency-svc-9w5hz May 17 12:59:16.879: INFO: Got endpoints: latency-svc-9w5hz [887.850513ms] May 17 12:59:16.995: INFO: Created: latency-svc-zbgc8 May 17 12:59:16.996: INFO: Got endpoints: latency-svc-zbgc8 [945.287615ms] May 17 12:59:17.137: INFO: Created: latency-svc-kdpcz May 17 12:59:17.140: INFO: Got endpoints: latency-svc-kdpcz [961.208061ms] May 17 12:59:17.208: INFO: Created: latency-svc-t2jzl May 17 12:59:17.281: INFO: Got endpoints: latency-svc-t2jzl [1.062537615s] May 17 12:59:17.300: INFO: Created: latency-svc-294bt May 17 12:59:17.317: INFO: Got endpoints: latency-svc-294bt [982.535756ms] May 17 12:59:17.336: INFO: Created: latency-svc-ztjjg May 17 12:59:17.354: INFO: Got endpoints: latency-svc-ztjjg [988.121461ms] May 17 12:59:17.376: INFO: Created: latency-svc-z5tp9 May 17 12:59:17.442: INFO: Got endpoints: latency-svc-z5tp9 [1.046503494s] May 17 12:59:17.444: INFO: Created: latency-svc-ctq4d May 17 12:59:17.450: INFO: Got endpoints: latency-svc-ctq4d [1.023884527s] May 17 12:59:17.479: INFO: Created: latency-svc-2fm6s May 17 12:59:17.492: INFO: Got endpoints: latency-svc-2fm6s [1.005475289s] May 17 12:59:17.515: INFO: Created: latency-svc-r4jlx May 17 12:59:17.528: INFO: Got endpoints: latency-svc-r4jlx [1.005084686s] May 17 12:59:17.586: INFO: Created: latency-svc-r7vcf May 17 12:59:17.590: INFO: Got endpoints: latency-svc-r7vcf [1.03095593s] May 17 12:59:17.616: INFO: Created: latency-svc-sk8jr May 17 12:59:17.630: INFO: Got 
endpoints: latency-svc-sk8jr [995.436655ms]
May 17 12:59:17.652: INFO: Created: latency-svc-h58n4
May 17 12:59:17.662: INFO: Got endpoints: latency-svc-h58n4 [1.018295792s]
May 17 12:59:17.718: INFO: Created: latency-svc-5t8db
May 17 12:59:17.721: INFO: Got endpoints: latency-svc-5t8db [1.03408847s]
May 17 12:59:17.755: INFO: Created: latency-svc-scsmn
May 17 12:59:17.770: INFO: Got endpoints: latency-svc-scsmn [956.278755ms]
May 17 12:59:17.791: INFO: Created: latency-svc-sllsm
May 17 12:59:17.807: INFO: Got endpoints: latency-svc-sllsm [927.871611ms]
May 17 12:59:17.857: INFO: Created: latency-svc-8wv6h
May 17 12:59:17.860: INFO: Got endpoints: latency-svc-8wv6h [863.819289ms]
May 17 12:59:17.886: INFO: Created: latency-svc-74twh
May 17 12:59:17.897: INFO: Got endpoints: latency-svc-74twh [756.543225ms]
May 17 12:59:17.916: INFO: Created: latency-svc-8dkxh
May 17 12:59:17.926: INFO: Got endpoints: latency-svc-8dkxh [645.740656ms]
May 17 12:59:17.949: INFO: Created: latency-svc-5vzxz
May 17 12:59:17.990: INFO: Got endpoints: latency-svc-5vzxz [673.131346ms]
May 17 12:59:18.001: INFO: Created: latency-svc-c5k4b
May 17 12:59:18.017: INFO: Got endpoints: latency-svc-c5k4b [663.501949ms]
May 17 12:59:18.037: INFO: Created: latency-svc-rg8mj
May 17 12:59:18.055: INFO: Got endpoints: latency-svc-rg8mj [612.440579ms]
May 17 12:59:18.072: INFO: Created: latency-svc-cnk4t
May 17 12:59:18.084: INFO: Got endpoints: latency-svc-cnk4t [633.994445ms]
May 17 12:59:18.143: INFO: Created: latency-svc-vfqcr
May 17 12:59:18.162: INFO: Got endpoints: latency-svc-vfqcr [670.266416ms]
May 17 12:59:18.193: INFO: Created: latency-svc-w4gvr
May 17 12:59:18.211: INFO: Got endpoints: latency-svc-w4gvr [682.883388ms]
May 17 12:59:18.236: INFO: Created: latency-svc-phcmt
May 17 12:59:18.281: INFO: Got endpoints: latency-svc-phcmt [690.658224ms]
May 17 12:59:18.307: INFO: Created: latency-svc-hdjzr
May 17 12:59:18.325: INFO: Got endpoints: latency-svc-hdjzr [694.948848ms]
May 17 12:59:18.372: INFO: Created: latency-svc-r6x9d
May 17 12:59:18.425: INFO: Got endpoints: latency-svc-r6x9d [762.884443ms]
May 17 12:59:18.427: INFO: Created: latency-svc-zdd45
May 17 12:59:18.433: INFO: Got endpoints: latency-svc-zdd45 [712.127317ms]
May 17 12:59:18.463: INFO: Created: latency-svc-xngmd
May 17 12:59:18.475: INFO: Got endpoints: latency-svc-xngmd [705.522709ms]
May 17 12:59:18.493: INFO: Created: latency-svc-759c5
May 17 12:59:18.506: INFO: Got endpoints: latency-svc-759c5 [698.875518ms]
May 17 12:59:18.523: INFO: Created: latency-svc-p5qmb
May 17 12:59:18.574: INFO: Got endpoints: latency-svc-p5qmb [714.211276ms]
May 17 12:59:18.577: INFO: Created: latency-svc-44wpr
May 17 12:59:18.584: INFO: Got endpoints: latency-svc-44wpr [687.462066ms]
May 17 12:59:18.606: INFO: Created: latency-svc-fc5zb
May 17 12:59:18.614: INFO: Got endpoints: latency-svc-fc5zb [687.935991ms]
May 17 12:59:18.637: INFO: Created: latency-svc-26jsg
May 17 12:59:18.651: INFO: Got endpoints: latency-svc-26jsg [660.932243ms]
May 17 12:59:18.673: INFO: Created: latency-svc-qbmj6
May 17 12:59:18.718: INFO: Got endpoints: latency-svc-qbmj6 [701.072388ms]
May 17 12:59:18.732: INFO: Created: latency-svc-km7cm
May 17 12:59:18.747: INFO: Got endpoints: latency-svc-km7cm [692.405448ms]
May 17 12:59:18.806: INFO: Created: latency-svc-xnbwb
May 17 12:59:18.862: INFO: Got endpoints: latency-svc-xnbwb [777.31214ms]
May 17 12:59:18.907: INFO: Created: latency-svc-s4qsr
May 17 12:59:18.922: INFO: Got endpoints: latency-svc-s4qsr [759.918662ms]
May 17 12:59:19.030: INFO: Created: latency-svc-lnbcs
May 17 12:59:19.037: INFO: Got endpoints: latency-svc-lnbcs [825.734748ms]
May 17 12:59:19.087: INFO: Created: latency-svc-2h7fp
May 17 12:59:19.103: INFO: Got endpoints: latency-svc-2h7fp [822.042432ms]
May 17 12:59:19.123: INFO: Created: latency-svc-vfxm6
May 17 12:59:19.161: INFO: Got endpoints: latency-svc-vfxm6 [835.577455ms]
May 17 12:59:19.195: INFO: Created: latency-svc-97pxr
May 17 12:59:19.211: INFO: Got endpoints: latency-svc-97pxr [785.933808ms]
May 17 12:59:19.236: INFO: Created: latency-svc-h7k4q
May 17 12:59:19.253: INFO: Got endpoints: latency-svc-h7k4q [820.270502ms]
May 17 12:59:19.300: INFO: Created: latency-svc-scftx
May 17 12:59:19.344: INFO: Got endpoints: latency-svc-scftx [869.222228ms]
May 17 12:59:19.347: INFO: Created: latency-svc-fh4qj
May 17 12:59:19.375: INFO: Got endpoints: latency-svc-fh4qj [868.836324ms]
May 17 12:59:19.437: INFO: Created: latency-svc-bzjqs
May 17 12:59:19.452: INFO: Got endpoints: latency-svc-bzjqs [877.371685ms]
May 17 12:59:19.483: INFO: Created: latency-svc-m25c7
May 17 12:59:19.506: INFO: Got endpoints: latency-svc-m25c7 [921.601788ms]
May 17 12:59:19.536: INFO: Created: latency-svc-t8467
May 17 12:59:19.595: INFO: Got endpoints: latency-svc-t8467 [980.731929ms]
May 17 12:59:19.621: INFO: Created: latency-svc-5hvzn
May 17 12:59:19.638: INFO: Got endpoints: latency-svc-5hvzn [987.166254ms]
May 17 12:59:19.656: INFO: Created: latency-svc-zsrns
May 17 12:59:19.668: INFO: Got endpoints: latency-svc-zsrns [950.048203ms]
May 17 12:59:19.692: INFO: Created: latency-svc-fx72b
May 17 12:59:19.730: INFO: Got endpoints: latency-svc-fx72b [982.433234ms]
May 17 12:59:19.747: INFO: Created: latency-svc-6cbtw
May 17 12:59:19.760: INFO: Got endpoints: latency-svc-6cbtw [898.066504ms]
May 17 12:59:19.795: INFO: Created: latency-svc-g6w2r
May 17 12:59:19.808: INFO: Got endpoints: latency-svc-g6w2r [885.717364ms]
May 17 12:59:19.826: INFO: Created: latency-svc-zg2nb
May 17 12:59:19.869: INFO: Got endpoints: latency-svc-zg2nb [832.769101ms]
May 17 12:59:19.883: INFO: Created: latency-svc-x98hd
May 17 12:59:19.898: INFO: Got endpoints: latency-svc-x98hd [795.482371ms]
May 17 12:59:19.919: INFO: Created: latency-svc-l6v6k
May 17 12:59:19.928: INFO: Got endpoints: latency-svc-l6v6k [767.443535ms]
May 17 12:59:19.950: INFO: Created: latency-svc-bzsdl
May 17 12:59:19.965: INFO: Got endpoints: latency-svc-bzsdl [753.916331ms]
May 17 12:59:20.012: INFO: Created: latency-svc-5hchg
May 17 12:59:20.025: INFO: Got endpoints: latency-svc-5hchg [771.763809ms]
May 17 12:59:20.047: INFO: Created: latency-svc-7g7nh
May 17 12:59:20.061: INFO: Got endpoints: latency-svc-7g7nh [716.728628ms]
May 17 12:59:20.083: INFO: Created: latency-svc-4vq82
May 17 12:59:20.119: INFO: Got endpoints: latency-svc-4vq82 [744.50844ms]
May 17 12:59:20.136: INFO: Created: latency-svc-7fr68
May 17 12:59:20.152: INFO: Got endpoints: latency-svc-7fr68 [699.9803ms]
May 17 12:59:20.178: INFO: Created: latency-svc-jp56t
May 17 12:59:20.188: INFO: Got endpoints: latency-svc-jp56t [682.31987ms]
May 17 12:59:20.257: INFO: Created: latency-svc-6f5wp
May 17 12:59:20.287: INFO: Got endpoints: latency-svc-6f5wp [691.622119ms]
May 17 12:59:20.288: INFO: Created: latency-svc-8hlkt
May 17 12:59:20.302: INFO: Got endpoints: latency-svc-8hlkt [663.994737ms]
May 17 12:59:20.334: INFO: Created: latency-svc-74sn5
May 17 12:59:20.393: INFO: Got endpoints: latency-svc-74sn5 [724.333027ms]
May 17 12:59:20.400: INFO: Created: latency-svc-hnf75
May 17 12:59:20.411: INFO: Got endpoints: latency-svc-hnf75 [680.891275ms]
May 17 12:59:20.411: INFO: Latencies: [96.001174ms 102.048298ms 193.079392ms 282.332716ms 354.554855ms 420.597382ms 475.311718ms 583.771006ms 596.038752ms 612.440579ms 631.204256ms 633.994445ms 645.740656ms 660.932243ms 661.409769ms 663.501949ms 663.994737ms 670.266416ms 673.131346ms 680.891275ms 682.31987ms 682.883388ms 687.462066ms 687.935991ms 690.658224ms 691.622119ms 692.405448ms 694.948848ms 698.875518ms 699.9803ms 701.072388ms 705.522709ms 712.127317ms 714.211276ms 716.728628ms 724.333027ms 724.785927ms 728.762129ms 733.618284ms 733.981021ms 744.50844ms 753.786166ms 753.901174ms 753.916331ms 756.543225ms 759.918662ms 762.884443ms 765.322398ms 766.019455ms 767.443535ms 771.763809ms 772.398633ms 776.079028ms 776.442971ms 777.31214ms 777.525442ms 777.728605ms 777.837465ms 778.097418ms 778.486761ms 783.243852ms 783.808892ms 785.933808ms 791.50195ms 791.973109ms 792.619352ms 792.881259ms 795.482371ms 801.378357ms 802.050209ms 805.913746ms 807.838204ms 810.21834ms 816.306622ms 816.927376ms 817.779563ms 817.875154ms 820.270502ms 822.042432ms 824.123681ms 825.734748ms 826.423057ms 831.440333ms 831.933856ms 832.769101ms 835.577455ms 837.280756ms 837.948637ms 844.484314ms 850.67872ms 853.179901ms 855.234274ms 856.048903ms 857.15091ms 863.819289ms 865.735285ms 868.561871ms 868.836324ms 869.062814ms 869.222228ms 870.358263ms 873.411164ms 876.003949ms 877.371685ms 877.541122ms 878.686748ms 885.717364ms 886.917051ms 887.850513ms 889.398303ms 890.935434ms 892.714662ms 893.159646ms 896.391014ms 898.066504ms 899.785022ms 904.613373ms 905.053202ms 906.33297ms 906.679026ms 910.094735ms 915.545798ms 916.612513ms 916.95512ms 917.851532ms 920.75134ms 921.601788ms 924.10962ms 926.777617ms 926.857484ms 926.863717ms 927.871611ms 928.076534ms 929.30429ms 929.40243ms 938.626444ms 940.156307ms 940.64564ms 944.563833ms 945.073021ms 945.287615ms 947.296443ms 948.819978ms 949.236514ms 950.048203ms 951.114497ms 953.673644ms 954.728306ms 956.278755ms 956.435841ms 957.276884ms 961.208061ms 965.376451ms 965.454468ms 968.970564ms 970.562016ms 974.491617ms 979.268321ms 980.731929ms 982.433234ms 982.535756ms 982.628312ms 982.962266ms 983.199348ms 985.427646ms 986.446773ms 987.166254ms 988.121461ms 990.860139ms 995.436655ms 998.451316ms 1.004436041s 1.004665234s 1.005084686s 1.005475289s 1.0106852s 1.016756432s 1.018295792s 1.020896138s 1.023884527s 1.027285192s 1.03095593s 1.03408847s 1.046503494s 1.062537615s 1.143151481s 1.169063321s 1.18154559s 1.212468391s 1.214348222s 1.215172705s 1.226435298s 1.229260042s 1.235599495s 1.24519678s 1.245852559s 1.26851457s 1.268743604s 1.275269376s 1.278334284s]
May 17 12:59:20.411: INFO: 50 %ile: 870.358263ms
May 17 12:59:20.411: INFO: 90 %ile: 1.027285192s
May 17 12:59:20.411: INFO: 99 %ile: 1.275269376s
May 17 12:59:20.411: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:59:20.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-1271" for this suite.
May 17 12:59:42.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:59:42.531: INFO: namespace svc-latency-1271 deletion completed in 22.113907952s
• [SLOW TEST:38.231 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:59:42.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
STEP: reading a file in the container
May 17 12:59:47.160: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3577 pod-service-account-70c47b7c-2094-40a8-857e-b9d6c4ff1c04 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
May 17 12:59:49.690: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3577 pod-service-account-70c47b7c-2094-40a8-857e-b9d6c4ff1c04 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
May 17 12:59:49.910: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3577 pod-service-account-70c47b7c-2094-40a8-857e-b9d6c4ff1c04 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 12:59:50.121: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3577" for this suite.
May 17 12:59:56.166: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 12:59:56.293: INFO: namespace svcaccounts-3577 deletion completed in 6.142700236s
• [SLOW TEST:13.762 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 12:59:56.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 17 12:59:56.373: INFO: Waiting up to 5m0s for pod "downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf" in namespace "projected-6415" to be "success or failure"
May 17 12:59:56.377: INFO: Pod "downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf": Phase="Pending", Reason="", readiness=false. Elapsed: 3.917783ms
May 17 12:59:58.380: INFO: Pod "downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007319042s
May 17 13:00:00.384: INFO: Pod "downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.010782288s
May 17 13:00:02.387: INFO: Pod "downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.014185287s
STEP: Saw pod success
May 17 13:00:02.387: INFO: Pod "downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf" satisfied condition "success or failure"
May 17 13:00:02.390: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf container client-container:
STEP: delete the pod
May 17 13:00:02.432: INFO: Waiting for pod downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf to disappear
May 17 13:00:02.436: INFO: Pod downwardapi-volume-63ce8de0-b893-4d7b-a496-6951201dbecf no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:00:02.437: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6415" for this suite.
May 17 13:00:08.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:00:08.542: INFO: namespace projected-6415 deletion completed in 6.101521515s
• [SLOW TEST:12.248 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:00:08.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:00:12.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2733" for this suite.
May 17 13:00:18.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:00:18.987: INFO: namespace emptydir-wrapper-2733 deletion completed in 6.108926378s
• [SLOW TEST:10.445 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:00:18.987: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 17 13:00:23.093: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:00:23.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7630" for this suite.
May 17 13:00:29.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:00:29.224: INFO: namespace container-runtime-7630 deletion completed in 6.109850832s
• [SLOW TEST:10.236 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:00:29.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
May 17 13:00:29.298: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:00:29.389: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5769" for this suite.
May 17 13:00:35.424: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:00:35.507: INFO: namespace kubectl-5769 deletion completed in 6.113763501s
• [SLOW TEST:6.283 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0 [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:00:35.507: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-62b5478b-e68b-4e75-a2fa-9e2d1952077c
STEP: Creating a pod to test consume configMaps
May 17 13:00:35.650: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1f21a3de-ba00-4eba-bc2c-e9c85c49061a" in namespace "projected-5380" to be "success or failure"
May 17 13:00:35.653: INFO: Pod "pod-projected-configmaps-1f21a3de-ba00-4eba-bc2c-e9c85c49061a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.297875ms
May 17 13:00:37.658: INFO: Pod "pod-projected-configmaps-1f21a3de-ba00-4eba-bc2c-e9c85c49061a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00835783s
May 17 13:00:39.662: INFO: Pod "pod-projected-configmaps-1f21a3de-ba00-4eba-bc2c-e9c85c49061a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012220009s
STEP: Saw pod success
May 17 13:00:39.662: INFO: Pod "pod-projected-configmaps-1f21a3de-ba00-4eba-bc2c-e9c85c49061a" satisfied condition "success or failure"
May 17 13:00:39.665: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1f21a3de-ba00-4eba-bc2c-e9c85c49061a container projected-configmap-volume-test:
STEP: delete the pod
May 17 13:00:39.703: INFO: Waiting for pod pod-projected-configmaps-1f21a3de-ba00-4eba-bc2c-e9c85c49061a to disappear
May 17 13:00:39.707: INFO: Pod pod-projected-configmaps-1f21a3de-ba00-4eba-bc2c-e9c85c49061a no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:00:39.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5380" for this suite.
May 17 13:00:45.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:00:45.795: INFO: namespace projected-5380 deletion completed in 6.084730577s
• [SLOW TEST:10.288 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:00:45.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-d7bc3089-246a-4297-aec9-685c626d6d95
STEP: Creating a pod to test consume secrets
May 17 13:00:45.907: INFO: Waiting up to 5m0s for pod "pod-secrets-3c7ac919-9840-4944-96c7-c5416d0ed81e" in namespace "secrets-660" to be "success or failure"
May 17 13:00:45.923: INFO: Pod "pod-secrets-3c7ac919-9840-4944-96c7-c5416d0ed81e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.412974ms
May 17 13:00:47.926: INFO: Pod "pod-secrets-3c7ac919-9840-4944-96c7-c5416d0ed81e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019825047s
May 17 13:00:49.977: INFO: Pod "pod-secrets-3c7ac919-9840-4944-96c7-c5416d0ed81e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070675356s
STEP: Saw pod success
May 17 13:00:49.977: INFO: Pod "pod-secrets-3c7ac919-9840-4944-96c7-c5416d0ed81e" satisfied condition "success or failure"
May 17 13:00:49.980: INFO: Trying to get logs from node iruya-worker pod pod-secrets-3c7ac919-9840-4944-96c7-c5416d0ed81e container secret-volume-test:
STEP: delete the pod
May 17 13:00:50.011: INFO: Waiting for pod pod-secrets-3c7ac919-9840-4944-96c7-c5416d0ed81e to disappear
May 17 13:00:50.015: INFO: Pod pod-secrets-3c7ac919-9840-4944-96c7-c5416d0ed81e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:00:50.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-660" for this suite.
May 17 13:00:56.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:00:56.109: INFO: namespace secrets-660 deletion completed in 6.090488128s
• [SLOW TEST:10.313 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:00:56.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:01:18.246: INFO: Container started at 2020-05-17 13:00:58 +0000 UTC, pod became ready at 2020-05-17 13:01:17 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:01:18.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5611" for this suite.
May 17 13:01:40.265: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:01:40.348: INFO: namespace container-probe-5611 deletion completed in 22.09870376s
• [SLOW TEST:44.239 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:01:40.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
May 17 13:01:45.464: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:01:46.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5998" for this suite.
May 17 13:02:08.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:02:08.616: INFO: namespace replicaset-5998 deletion completed in 22.113301829s
• [SLOW TEST:28.267 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:02:08.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6842/configmap-test-0ea22347-8109-4cb3-a205-a7f8047fb874
STEP: Creating a pod to test consume configMaps
May 17 13:02:08.702: INFO: Waiting up to 5m0s for pod "pod-configmaps-035d0d21-19e1-43a1-a7e0-9a4872e0da79" in namespace "configmap-6842" to be "success or failure"
May 17 13:02:08.720: INFO: Pod "pod-configmaps-035d0d21-19e1-43a1-a7e0-9a4872e0da79": Phase="Pending", Reason="", readiness=false. Elapsed: 18.55339ms
May 17 13:02:10.793: INFO: Pod "pod-configmaps-035d0d21-19e1-43a1-a7e0-9a4872e0da79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091826425s
May 17 13:02:12.798: INFO: Pod "pod-configmaps-035d0d21-19e1-43a1-a7e0-9a4872e0da79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.096690407s
STEP: Saw pod success
May 17 13:02:12.798: INFO: Pod "pod-configmaps-035d0d21-19e1-43a1-a7e0-9a4872e0da79" satisfied condition "success or failure"
May 17 13:02:12.802: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-035d0d21-19e1-43a1-a7e0-9a4872e0da79 container env-test:
STEP: delete the pod
May 17 13:02:12.818: INFO: Waiting for pod pod-configmaps-035d0d21-19e1-43a1-a7e0-9a4872e0da79 to disappear
May 17 13:02:12.822: INFO: Pod pod-configmaps-035d0d21-19e1-43a1-a7e0-9a4872e0da79 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:02:12.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6842" for this suite.
May 17 13:02:18.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:02:18.907: INFO: namespace configmap-6842 deletion completed in 6.082773688s
• [SLOW TEST:10.291 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:02:18.908: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0517 13:02:59.617609 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 17 13:02:59.617: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:02:59.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5034" for this suite. 
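The garbage-collector test above creates a ReplicationController, deletes it with delete options that request orphaning, then waits 30 seconds to confirm the collector does not remove the pods. A sketch of the relevant delete request body, under the assumption that the test uses the standard orphan propagation policy (on a v1.15-era cluster the equivalent kubectl flag is `--cascade=false`):

```yaml
# Sketch of the DeleteOptions sent in the "delete the rc" step.
# Orphan propagation tells the garbage collector to leave the
# RC's dependent pods in place instead of cascading the delete.
apiVersion: v1
kind: DeleteOptions
propagationPolicy: Orphan
```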
May 17 13:03:11.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:03:11.719: INFO: namespace gc-5034 deletion completed in 12.098980471s • [SLOW TEST:52.811 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:03:11.719: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 13:03:11.817: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b234366c-3f0b-4112-a92e-698ea909c726" in namespace "downward-api-4676" to be "success or failure" May 17 13:03:11.821: INFO: Pod "downwardapi-volume-b234366c-3f0b-4112-a92e-698ea909c726": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.868208ms May 17 13:03:13.825: INFO: Pod "downwardapi-volume-b234366c-3f0b-4112-a92e-698ea909c726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007893426s May 17 13:03:15.829: INFO: Pod "downwardapi-volume-b234366c-3f0b-4112-a92e-698ea909c726": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01195912s STEP: Saw pod success May 17 13:03:15.830: INFO: Pod "downwardapi-volume-b234366c-3f0b-4112-a92e-698ea909c726" satisfied condition "success or failure" May 17 13:03:15.832: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b234366c-3f0b-4112-a92e-698ea909c726 container client-container: STEP: delete the pod May 17 13:03:15.881: INFO: Waiting for pod downwardapi-volume-b234366c-3f0b-4112-a92e-698ea909c726 to disappear May 17 13:03:15.887: INFO: Pod downwardapi-volume-b234366c-3f0b-4112-a92e-698ea909c726 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:03:15.887: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4676" for this suite. 
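The Downward API test above projects `limits.memory` into a volume without setting a memory limit on the container, so the kubelet substitutes the node's allocatable memory as the default. A hedged sketch of such a pod (names are illustrative except the container name `client-container` from the log):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example  # illustrative; the test generates a unique name
spec:
  restartPolicy: Never
  containers:
  - name: client-container          # container name as seen in the log above
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    # No resources.limits.memory is set here, so the projected value
    # falls back to the node's allocatable memory.
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: memory_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory
```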
May 17 13:03:21.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:03:21.984: INFO: namespace downward-api-4676 deletion completed in 6.093588489s • [SLOW TEST:10.265 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:03:21.984: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token May 17 13:03:22.612: INFO: created pod pod-service-account-defaultsa May 17 13:03:22.612: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 17 13:03:22.621: INFO: created pod pod-service-account-mountsa May 17 13:03:22.621: INFO: pod pod-service-account-mountsa service account token volume mount: true May 17 13:03:22.632: INFO: created pod pod-service-account-nomountsa May 17 13:03:22.632: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 17 13:03:22.697: INFO: created pod 
pod-service-account-defaultsa-mountspec May 17 13:03:22.697: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 17 13:03:22.758: INFO: created pod pod-service-account-mountsa-mountspec May 17 13:03:22.758: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 17 13:03:22.763: INFO: created pod pod-service-account-nomountsa-mountspec May 17 13:03:22.763: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 17 13:03:22.806: INFO: created pod pod-service-account-defaultsa-nomountspec May 17 13:03:22.806: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 17 13:03:22.858: INFO: created pod pod-service-account-mountsa-nomountspec May 17 13:03:22.858: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 17 13:03:22.924: INFO: created pod pod-service-account-nomountsa-nomountspec May 17 13:03:22.924: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:03:22.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-4579" for this suite. 
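The nine pods in the log above form a matrix: three service accounts (default, automount enabled, automount disabled) crossed with three pod-level settings (unset, `true`, `false`). The observed mounts show that a pod-level `automountServiceAccountToken` overrides the service account's setting. A sketch of one cell of that matrix, matching the `pod-service-account-nomountsa-mountspec` result (`mount: true`); the service-account name and container details are illustrative:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nomount-sa                      # illustrative name
automountServiceAccountToken: false     # SA opts out of token automount
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-service-account-nomountsa-mountspec
spec:
  serviceAccountName: nomount-sa
  automountServiceAccountToken: true    # pod-level field wins: token IS mounted
  containers:
  - name: token-test                    # illustrative container
    image: busybox
    command: ["sleep", "3600"]
```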
May 17 13:03:51.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:03:51.142: INFO: namespace svcaccounts-4579 deletion completed in 28.167904235s • [SLOW TEST:29.158 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:03:51.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-8250 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-8250 STEP: Waiting until all stateful set ss replicas will be running in namespace 
statefulset-8250 May 17 13:03:51.249: INFO: Found 0 stateful pods, waiting for 1 May 17 13:04:01.255: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 17 13:04:01.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8250 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 17 13:04:01.540: INFO: stderr: "I0517 13:04:01.379724 131 log.go:172] (0xc0009fc420) (0xc00033e6e0) Create stream\nI0517 13:04:01.379783 131 log.go:172] (0xc0009fc420) (0xc00033e6e0) Stream added, broadcasting: 1\nI0517 13:04:01.390685 131 log.go:172] (0xc0009fc420) Reply frame received for 1\nI0517 13:04:01.390754 131 log.go:172] (0xc0009fc420) (0xc0009a6000) Create stream\nI0517 13:04:01.390776 131 log.go:172] (0xc0009fc420) (0xc0009a6000) Stream added, broadcasting: 3\nI0517 13:04:01.395191 131 log.go:172] (0xc0009fc420) Reply frame received for 3\nI0517 13:04:01.395227 131 log.go:172] (0xc0009fc420) (0xc00033e780) Create stream\nI0517 13:04:01.395235 131 log.go:172] (0xc0009fc420) (0xc00033e780) Stream added, broadcasting: 5\nI0517 13:04:01.395941 131 log.go:172] (0xc0009fc420) Reply frame received for 5\nI0517 13:04:01.479167 131 log.go:172] (0xc0009fc420) Data frame received for 5\nI0517 13:04:01.479205 131 log.go:172] (0xc00033e780) (5) Data frame handling\nI0517 13:04:01.479227 131 log.go:172] (0xc00033e780) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 13:04:01.529937 131 log.go:172] (0xc0009fc420) Data frame received for 3\nI0517 13:04:01.529975 131 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0517 13:04:01.529988 131 log.go:172] (0xc0009a6000) (3) Data frame sent\nI0517 13:04:01.529999 131 log.go:172] (0xc0009fc420) Data frame received for 3\nI0517 13:04:01.530015 131 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0517 13:04:01.532175 131 
log.go:172] (0xc0009fc420) Data frame received for 5\nI0517 13:04:01.532203 131 log.go:172] (0xc00033e780) (5) Data frame handling\nI0517 13:04:01.533603 131 log.go:172] (0xc0009fc420) Data frame received for 1\nI0517 13:04:01.533629 131 log.go:172] (0xc00033e6e0) (1) Data frame handling\nI0517 13:04:01.533647 131 log.go:172] (0xc00033e6e0) (1) Data frame sent\nI0517 13:04:01.533926 131 log.go:172] (0xc0009fc420) (0xc00033e6e0) Stream removed, broadcasting: 1\nI0517 13:04:01.534195 131 log.go:172] (0xc0009fc420) Go away received\nI0517 13:04:01.534531 131 log.go:172] (0xc0009fc420) (0xc00033e6e0) Stream removed, broadcasting: 1\nI0517 13:04:01.534566 131 log.go:172] (0xc0009fc420) (0xc0009a6000) Stream removed, broadcasting: 3\nI0517 13:04:01.534592 131 log.go:172] (0xc0009fc420) (0xc00033e780) Stream removed, broadcasting: 5\n" May 17 13:04:01.540: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 17 13:04:01.540: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 17 13:04:01.544: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 17 13:04:11.549: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 17 13:04:11.549: INFO: Waiting for statefulset status.replicas updated to 0 May 17 13:04:11.567: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999543s May 17 13:04:12.579: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.992822547s May 17 13:04:13.584: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.981194171s May 17 13:04:14.588: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.976477875s May 17 13:04:15.592: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.972674152s May 17 13:04:16.596: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.968585138s May 17 
13:04:17.601: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.963787624s May 17 13:04:18.605: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.959335002s May 17 13:04:19.613: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.954914922s May 17 13:04:20.618: INFO: Verifying statefulset ss doesn't scale past 1 for another 946.829412ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8250 May 17 13:04:21.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8250 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 17 13:04:21.852: INFO: stderr: "I0517 13:04:21.760035 152 log.go:172] (0xc0008fa2c0) (0xc00096c640) Create stream\nI0517 13:04:21.760108 152 log.go:172] (0xc0008fa2c0) (0xc00096c640) Stream added, broadcasting: 1\nI0517 13:04:21.761980 152 log.go:172] (0xc0008fa2c0) Reply frame received for 1\nI0517 13:04:21.762029 152 log.go:172] (0xc0008fa2c0) (0xc0008c2000) Create stream\nI0517 13:04:21.762045 152 log.go:172] (0xc0008fa2c0) (0xc0008c2000) Stream added, broadcasting: 3\nI0517 13:04:21.763008 152 log.go:172] (0xc0008fa2c0) Reply frame received for 3\nI0517 13:04:21.763057 152 log.go:172] (0xc0008fa2c0) (0xc00096c6e0) Create stream\nI0517 13:04:21.763089 152 log.go:172] (0xc0008fa2c0) (0xc00096c6e0) Stream added, broadcasting: 5\nI0517 13:04:21.763878 152 log.go:172] (0xc0008fa2c0) Reply frame received for 5\nI0517 13:04:21.845443 152 log.go:172] (0xc0008fa2c0) Data frame received for 5\nI0517 13:04:21.845476 152 log.go:172] (0xc00096c6e0) (5) Data frame handling\nI0517 13:04:21.845499 152 log.go:172] (0xc0008fa2c0) Data frame received for 3\nI0517 13:04:21.845530 152 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0517 13:04:21.845540 152 log.go:172] (0xc0008c2000) (3) Data frame sent\nI0517 13:04:21.845545 152 log.go:172] (0xc0008fa2c0) Data frame received 
for 3\nI0517 13:04:21.845551 152 log.go:172] (0xc0008c2000) (3) Data frame handling\nI0517 13:04:21.845584 152 log.go:172] (0xc00096c6e0) (5) Data frame sent\nI0517 13:04:21.845590 152 log.go:172] (0xc0008fa2c0) Data frame received for 5\nI0517 13:04:21.845594 152 log.go:172] (0xc00096c6e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0517 13:04:21.846666 152 log.go:172] (0xc0008fa2c0) Data frame received for 1\nI0517 13:04:21.846699 152 log.go:172] (0xc00096c640) (1) Data frame handling\nI0517 13:04:21.846719 152 log.go:172] (0xc00096c640) (1) Data frame sent\nI0517 13:04:21.846739 152 log.go:172] (0xc0008fa2c0) (0xc00096c640) Stream removed, broadcasting: 1\nI0517 13:04:21.846779 152 log.go:172] (0xc0008fa2c0) Go away received\nI0517 13:04:21.847441 152 log.go:172] (0xc0008fa2c0) (0xc00096c640) Stream removed, broadcasting: 1\nI0517 13:04:21.847464 152 log.go:172] (0xc0008fa2c0) (0xc0008c2000) Stream removed, broadcasting: 3\nI0517 13:04:21.847474 152 log.go:172] (0xc0008fa2c0) (0xc00096c6e0) Stream removed, broadcasting: 5\n" May 17 13:04:21.852: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 17 13:04:21.852: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 17 13:04:21.856: INFO: Found 1 stateful pods, waiting for 3 May 17 13:04:31.862: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 17 13:04:31.862: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 17 13:04:31.862: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 17 13:04:31.868: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8250 ss-0 -- /bin/sh -x -c mv -v 
/usr/share/nginx/html/index.html /tmp/ || true' May 17 13:04:32.129: INFO: stderr: "I0517 13:04:32.030579 173 log.go:172] (0xc0009a8420) (0xc0006c2a00) Create stream\nI0517 13:04:32.030658 173 log.go:172] (0xc0009a8420) (0xc0006c2a00) Stream added, broadcasting: 1\nI0517 13:04:32.033837 173 log.go:172] (0xc0009a8420) Reply frame received for 1\nI0517 13:04:32.033891 173 log.go:172] (0xc0009a8420) (0xc0008f4000) Create stream\nI0517 13:04:32.033913 173 log.go:172] (0xc0009a8420) (0xc0008f4000) Stream added, broadcasting: 3\nI0517 13:04:32.035233 173 log.go:172] (0xc0009a8420) Reply frame received for 3\nI0517 13:04:32.035282 173 log.go:172] (0xc0009a8420) (0xc00081c000) Create stream\nI0517 13:04:32.035308 173 log.go:172] (0xc0009a8420) (0xc00081c000) Stream added, broadcasting: 5\nI0517 13:04:32.036171 173 log.go:172] (0xc0009a8420) Reply frame received for 5\nI0517 13:04:32.122346 173 log.go:172] (0xc0009a8420) Data frame received for 5\nI0517 13:04:32.122377 173 log.go:172] (0xc00081c000) (5) Data frame handling\nI0517 13:04:32.122389 173 log.go:172] (0xc00081c000) (5) Data frame sent\nI0517 13:04:32.122398 173 log.go:172] (0xc0009a8420) Data frame received for 5\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 13:04:32.122436 173 log.go:172] (0xc0009a8420) Data frame received for 3\nI0517 13:04:32.122480 173 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0517 13:04:32.122495 173 log.go:172] (0xc0008f4000) (3) Data frame sent\nI0517 13:04:32.122510 173 log.go:172] (0xc0009a8420) Data frame received for 3\nI0517 13:04:32.122520 173 log.go:172] (0xc0008f4000) (3) Data frame handling\nI0517 13:04:32.122557 173 log.go:172] (0xc00081c000) (5) Data frame handling\nI0517 13:04:32.124262 173 log.go:172] (0xc0009a8420) Data frame received for 1\nI0517 13:04:32.124286 173 log.go:172] (0xc0006c2a00) (1) Data frame handling\nI0517 13:04:32.124314 173 log.go:172] (0xc0006c2a00) (1) Data frame sent\nI0517 13:04:32.124351 173 log.go:172] (0xc0009a8420) 
(0xc0006c2a00) Stream removed, broadcasting: 1\nI0517 13:04:32.124442 173 log.go:172] (0xc0009a8420) Go away received\nI0517 13:04:32.124806 173 log.go:172] (0xc0009a8420) (0xc0006c2a00) Stream removed, broadcasting: 1\nI0517 13:04:32.124825 173 log.go:172] (0xc0009a8420) (0xc0008f4000) Stream removed, broadcasting: 3\nI0517 13:04:32.124836 173 log.go:172] (0xc0009a8420) (0xc00081c000) Stream removed, broadcasting: 5\n" May 17 13:04:32.130: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 17 13:04:32.130: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 17 13:04:32.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8250 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 17 13:04:32.392: INFO: stderr: "I0517 13:04:32.247923 194 log.go:172] (0xc0008cc0b0) (0xc00087e5a0) Create stream\nI0517 13:04:32.247985 194 log.go:172] (0xc0008cc0b0) (0xc00087e5a0) Stream added, broadcasting: 1\nI0517 13:04:32.250409 194 log.go:172] (0xc0008cc0b0) Reply frame received for 1\nI0517 13:04:32.250510 194 log.go:172] (0xc0008cc0b0) (0xc00085e000) Create stream\nI0517 13:04:32.250520 194 log.go:172] (0xc0008cc0b0) (0xc00085e000) Stream added, broadcasting: 3\nI0517 13:04:32.251310 194 log.go:172] (0xc0008cc0b0) Reply frame received for 3\nI0517 13:04:32.251346 194 log.go:172] (0xc0008cc0b0) (0xc00085e0a0) Create stream\nI0517 13:04:32.251357 194 log.go:172] (0xc0008cc0b0) (0xc00085e0a0) Stream added, broadcasting: 5\nI0517 13:04:32.252142 194 log.go:172] (0xc0008cc0b0) Reply frame received for 5\nI0517 13:04:32.317731 194 log.go:172] (0xc0008cc0b0) Data frame received for 5\nI0517 13:04:32.317766 194 log.go:172] (0xc00085e0a0) (5) Data frame handling\nI0517 13:04:32.317789 194 log.go:172] (0xc00085e0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 
13:04:32.383030 194 log.go:172] (0xc0008cc0b0) Data frame received for 5\nI0517 13:04:32.383059 194 log.go:172] (0xc00085e0a0) (5) Data frame handling\nI0517 13:04:32.383110 194 log.go:172] (0xc0008cc0b0) Data frame received for 3\nI0517 13:04:32.383136 194 log.go:172] (0xc00085e000) (3) Data frame handling\nI0517 13:04:32.383165 194 log.go:172] (0xc00085e000) (3) Data frame sent\nI0517 13:04:32.383176 194 log.go:172] (0xc0008cc0b0) Data frame received for 3\nI0517 13:04:32.383184 194 log.go:172] (0xc00085e000) (3) Data frame handling\nI0517 13:04:32.386249 194 log.go:172] (0xc0008cc0b0) Data frame received for 1\nI0517 13:04:32.386269 194 log.go:172] (0xc00087e5a0) (1) Data frame handling\nI0517 13:04:32.386284 194 log.go:172] (0xc00087e5a0) (1) Data frame sent\nI0517 13:04:32.386439 194 log.go:172] (0xc0008cc0b0) (0xc00087e5a0) Stream removed, broadcasting: 1\nI0517 13:04:32.386763 194 log.go:172] (0xc0008cc0b0) (0xc00087e5a0) Stream removed, broadcasting: 1\nI0517 13:04:32.386844 194 log.go:172] (0xc0008cc0b0) (0xc00085e000) Stream removed, broadcasting: 3\nI0517 13:04:32.386957 194 log.go:172] (0xc0008cc0b0) Go away received\nI0517 13:04:32.387296 194 log.go:172] (0xc0008cc0b0) (0xc00085e0a0) Stream removed, broadcasting: 5\n" May 17 13:04:32.392: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 17 13:04:32.392: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 17 13:04:32.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8250 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 17 13:04:32.645: INFO: stderr: "I0517 13:04:32.520002 211 log.go:172] (0xc000a12630) (0xc00052ab40) Create stream\nI0517 13:04:32.520053 211 log.go:172] (0xc000a12630) (0xc00052ab40) Stream added, broadcasting: 1\nI0517 13:04:32.523202 211 log.go:172] (0xc000a12630) Reply frame received for 
1\nI0517 13:04:32.523259 211 log.go:172] (0xc000a12630) (0xc000a06000) Create stream\nI0517 13:04:32.523277 211 log.go:172] (0xc000a12630) (0xc000a06000) Stream added, broadcasting: 3\nI0517 13:04:32.524729 211 log.go:172] (0xc000a12630) Reply frame received for 3\nI0517 13:04:32.524768 211 log.go:172] (0xc000a12630) (0xc000a060a0) Create stream\nI0517 13:04:32.524779 211 log.go:172] (0xc000a12630) (0xc000a060a0) Stream added, broadcasting: 5\nI0517 13:04:32.525931 211 log.go:172] (0xc000a12630) Reply frame received for 5\nI0517 13:04:32.589056 211 log.go:172] (0xc000a12630) Data frame received for 5\nI0517 13:04:32.589098 211 log.go:172] (0xc000a060a0) (5) Data frame handling\nI0517 13:04:32.589357 211 log.go:172] (0xc000a060a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 13:04:32.637803 211 log.go:172] (0xc000a12630) Data frame received for 3\nI0517 13:04:32.637906 211 log.go:172] (0xc000a06000) (3) Data frame handling\nI0517 13:04:32.637926 211 log.go:172] (0xc000a06000) (3) Data frame sent\nI0517 13:04:32.637938 211 log.go:172] (0xc000a12630) Data frame received for 3\nI0517 13:04:32.637949 211 log.go:172] (0xc000a06000) (3) Data frame handling\nI0517 13:04:32.637984 211 log.go:172] (0xc000a12630) Data frame received for 5\nI0517 13:04:32.638003 211 log.go:172] (0xc000a060a0) (5) Data frame handling\nI0517 13:04:32.639602 211 log.go:172] (0xc000a12630) Data frame received for 1\nI0517 13:04:32.639622 211 log.go:172] (0xc00052ab40) (1) Data frame handling\nI0517 13:04:32.639634 211 log.go:172] (0xc00052ab40) (1) Data frame sent\nI0517 13:04:32.639650 211 log.go:172] (0xc000a12630) (0xc00052ab40) Stream removed, broadcasting: 1\nI0517 13:04:32.639666 211 log.go:172] (0xc000a12630) Go away received\nI0517 13:04:32.639925 211 log.go:172] (0xc000a12630) (0xc00052ab40) Stream removed, broadcasting: 1\nI0517 13:04:32.639943 211 log.go:172] (0xc000a12630) (0xc000a06000) Stream removed, broadcasting: 3\nI0517 13:04:32.639949 211 
log.go:172] (0xc000a12630) (0xc000a060a0) Stream removed, broadcasting: 5\n" May 17 13:04:32.645: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 17 13:04:32.645: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 17 13:04:32.645: INFO: Waiting for statefulset status.replicas updated to 0 May 17 13:04:32.648: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 17 13:04:42.659: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 17 13:04:42.659: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 17 13:04:42.659: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 17 13:04:42.670: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999688s May 17 13:04:43.675: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993909317s May 17 13:04:44.680: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988613482s May 17 13:04:45.686: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.98414593s May 17 13:04:46.690: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.9777278s May 17 13:04:47.695: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.974247821s May 17 13:04:48.700: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.968769538s May 17 13:04:49.706: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.963521197s May 17 13:04:50.754: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.957736715s May 17 13:04:51.759: INFO: Verifying statefulset ss doesn't scale past 3 for another 909.997264ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-8250 May 17 13:04:52.764: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=statefulset-8250 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 17 13:04:53.011: INFO: stderr: "I0517 13:04:52.902416 232 log.go:172] (0xc000340370) (0xc00003b400) Create stream\nI0517 13:04:52.902473 232 log.go:172] (0xc000340370) (0xc00003b400) Stream added, broadcasting: 1\nI0517 13:04:52.904704 232 log.go:172] (0xc000340370) Reply frame received for 1\nI0517 13:04:52.904732 232 log.go:172] (0xc000340370) (0xc000534000) Create stream\nI0517 13:04:52.904743 232 log.go:172] (0xc000340370) (0xc000534000) Stream added, broadcasting: 3\nI0517 13:04:52.905988 232 log.go:172] (0xc000340370) Reply frame received for 3\nI0517 13:04:52.906056 232 log.go:172] (0xc000340370) (0xc0005340a0) Create stream\nI0517 13:04:52.906082 232 log.go:172] (0xc000340370) (0xc0005340a0) Stream added, broadcasting: 5\nI0517 13:04:52.907027 232 log.go:172] (0xc000340370) Reply frame received for 5\nI0517 13:04:53.005731 232 log.go:172] (0xc000340370) Data frame received for 3\nI0517 13:04:53.005775 232 log.go:172] (0xc000534000) (3) Data frame handling\nI0517 13:04:53.005801 232 log.go:172] (0xc000534000) (3) Data frame sent\nI0517 13:04:53.005964 232 log.go:172] (0xc000340370) Data frame received for 5\nI0517 13:04:53.006017 232 log.go:172] (0xc0005340a0) (5) Data frame handling\nI0517 13:04:53.006043 232 log.go:172] (0xc0005340a0) (5) Data frame sent\nI0517 13:04:53.006059 232 log.go:172] (0xc000340370) Data frame received for 5\nI0517 13:04:53.006081 232 log.go:172] (0xc0005340a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0517 13:04:53.006126 232 log.go:172] (0xc000340370) Data frame received for 3\nI0517 13:04:53.006171 232 log.go:172] (0xc000534000) (3) Data frame handling\nI0517 13:04:53.007018 232 log.go:172] (0xc000340370) Data frame received for 1\nI0517 13:04:53.007045 232 log.go:172] (0xc00003b400) (1) Data frame handling\nI0517 13:04:53.007069 232 log.go:172] 
(0xc00003b400) (1) Data frame sent\nI0517 13:04:53.007145 232 log.go:172] (0xc000340370) (0xc00003b400) Stream removed, broadcasting: 1\nI0517 13:04:53.007212 232 log.go:172] (0xc000340370) Go away received\nI0517 13:04:53.007398 232 log.go:172] (0xc000340370) (0xc00003b400) Stream removed, broadcasting: 1\nI0517 13:04:53.007412 232 log.go:172] (0xc000340370) (0xc000534000) Stream removed, broadcasting: 3\nI0517 13:04:53.007418 232 log.go:172] (0xc000340370) (0xc0005340a0) Stream removed, broadcasting: 5\n" May 17 13:04:53.011: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 17 13:04:53.011: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 17 13:04:53.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8250 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 17 13:04:53.201: INFO: stderr: "I0517 13:04:53.129672 251 log.go:172] (0xc000116580) (0xc000522820) Create stream\nI0517 13:04:53.129724 251 log.go:172] (0xc000116580) (0xc000522820) Stream added, broadcasting: 1\nI0517 13:04:53.131568 251 log.go:172] (0xc000116580) Reply frame received for 1\nI0517 13:04:53.131609 251 log.go:172] (0xc000116580) (0xc000796000) Create stream\nI0517 13:04:53.131628 251 log.go:172] (0xc000116580) (0xc000796000) Stream added, broadcasting: 3\nI0517 13:04:53.132207 251 log.go:172] (0xc000116580) Reply frame received for 3\nI0517 13:04:53.132233 251 log.go:172] (0xc000116580) (0xc0005228c0) Create stream\nI0517 13:04:53.132246 251 log.go:172] (0xc000116580) (0xc0005228c0) Stream added, broadcasting: 5\nI0517 13:04:53.132916 251 log.go:172] (0xc000116580) Reply frame received for 5\nI0517 13:04:53.193714 251 log.go:172] (0xc000116580) Data frame received for 5\nI0517 13:04:53.193762 251 log.go:172] (0xc0005228c0) (5) Data frame handling\nI0517 13:04:53.193780 251 log.go:172] (0xc0005228c0) 
(5) Data frame sent\nI0517 13:04:53.193794 251 log.go:172] (0xc000116580) Data frame received for 5\nI0517 13:04:53.193805 251 log.go:172] (0xc0005228c0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0517 13:04:53.193849 251 log.go:172] (0xc000116580) Data frame received for 3\nI0517 13:04:53.193891 251 log.go:172] (0xc000796000) (3) Data frame handling\nI0517 13:04:53.193914 251 log.go:172] (0xc000796000) (3) Data frame sent\nI0517 13:04:53.193936 251 log.go:172] (0xc000116580) Data frame received for 3\nI0517 13:04:53.193951 251 log.go:172] (0xc000796000) (3) Data frame handling\nI0517 13:04:53.195381 251 log.go:172] (0xc000116580) Data frame received for 1\nI0517 13:04:53.195404 251 log.go:172] (0xc000522820) (1) Data frame handling\nI0517 13:04:53.195415 251 log.go:172] (0xc000522820) (1) Data frame sent\nI0517 13:04:53.195428 251 log.go:172] (0xc000116580) (0xc000522820) Stream removed, broadcasting: 1\nI0517 13:04:53.195443 251 log.go:172] (0xc000116580) Go away received\nI0517 13:04:53.195787 251 log.go:172] (0xc000116580) (0xc000522820) Stream removed, broadcasting: 1\nI0517 13:04:53.195806 251 log.go:172] (0xc000116580) (0xc000796000) Stream removed, broadcasting: 3\nI0517 13:04:53.195814 251 log.go:172] (0xc000116580) (0xc0005228c0) Stream removed, broadcasting: 5\n" May 17 13:04:53.201: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 17 13:04:53.201: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 17 13:04:53.201: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8250 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 17 13:04:53.408: INFO: stderr: "I0517 13:04:53.344105 270 log.go:172] (0xc0008ee420) (0xc000664820) Create stream\nI0517 13:04:53.344156 270 log.go:172] (0xc0008ee420) (0xc000664820) Stream added, broadcasting: 
1\nI0517 13:04:53.346649 270 log.go:172] (0xc0008ee420) Reply frame received for 1\nI0517 13:04:53.346683 270 log.go:172] (0xc0008ee420) (0xc000998000) Create stream\nI0517 13:04:53.346693 270 log.go:172] (0xc0008ee420) (0xc000998000) Stream added, broadcasting: 3\nI0517 13:04:53.347573 270 log.go:172] (0xc0008ee420) Reply frame received for 3\nI0517 13:04:53.347616 270 log.go:172] (0xc0008ee420) (0xc0006648c0) Create stream\nI0517 13:04:53.347631 270 log.go:172] (0xc0008ee420) (0xc0006648c0) Stream added, broadcasting: 5\nI0517 13:04:53.348528 270 log.go:172] (0xc0008ee420) Reply frame received for 5\nI0517 13:04:53.400864 270 log.go:172] (0xc0008ee420) Data frame received for 3\nI0517 13:04:53.400891 270 log.go:172] (0xc000998000) (3) Data frame handling\nI0517 13:04:53.400899 270 log.go:172] (0xc000998000) (3) Data frame sent\nI0517 13:04:53.400968 270 log.go:172] (0xc0008ee420) Data frame received for 5\nI0517 13:04:53.400995 270 log.go:172] (0xc0006648c0) (5) Data frame handling\nI0517 13:04:53.401014 270 log.go:172] (0xc0006648c0) (5) Data frame sent\nI0517 13:04:53.401060 270 log.go:172] (0xc0008ee420) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0517 13:04:53.401092 270 log.go:172] (0xc0006648c0) (5) Data frame handling\nI0517 13:04:53.401340 270 log.go:172] (0xc0008ee420) Data frame received for 3\nI0517 13:04:53.401374 270 log.go:172] (0xc000998000) (3) Data frame handling\nI0517 13:04:53.402933 270 log.go:172] (0xc0008ee420) Data frame received for 1\nI0517 13:04:53.402955 270 log.go:172] (0xc000664820) (1) Data frame handling\nI0517 13:04:53.402981 270 log.go:172] (0xc000664820) (1) Data frame sent\nI0517 13:04:53.403005 270 log.go:172] (0xc0008ee420) (0xc000664820) Stream removed, broadcasting: 1\nI0517 13:04:53.403021 270 log.go:172] (0xc0008ee420) Go away received\nI0517 13:04:53.403463 270 log.go:172] (0xc0008ee420) (0xc000664820) Stream removed, broadcasting: 1\nI0517 13:04:53.403491 270 log.go:172] (0xc0008ee420) 
(0xc000998000) Stream removed, broadcasting: 3\nI0517 13:04:53.403501 270 log.go:172] (0xc0008ee420) (0xc0006648c0) Stream removed, broadcasting: 5\n" May 17 13:04:53.408: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 17 13:04:53.408: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 17 13:04:53.408: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 17 13:05:13.423: INFO: Deleting all statefulset in ns statefulset-8250 May 17 13:05:13.425: INFO: Scaling statefulset ss to 0 May 17 13:05:13.434: INFO: Waiting for statefulset status.replicas updated to 0 May 17 13:05:13.436: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:05:13.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8250" for this suite. 
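Each kubectl exec above runs the same shell fragment, `mv -v <src> <dst> || true`, inside the pod; the `|| true` makes the step idempotent, since a second run fails once the file has already been moved but must not fail the test. A minimal local sketch of that behavior (temporary directories stand in for the pod's `/tmp` and `/usr/share/nginx/html`; nothing here touches a cluster):

```shell
# Local sketch of the idempotent file toggle the e2e test runs via
# `kubectl exec ... /bin/sh -x -c 'mv -v ... || true'`.
# The directories below are stand-ins for the pod's paths.
workdir=$(mktemp -d)
mkdir -p "$workdir/tmp" "$workdir/html"
echo ok > "$workdir/tmp/index.html"

# First run: the move succeeds and mv -v prints the rename.
mv -v "$workdir/tmp/index.html" "$workdir/html/" || true

# Second run: mv fails (the source is gone), but `|| true` keeps the
# exit status at 0, so the surrounding harness still sees success.
mv -v "$workdir/tmp/index.html" "$workdir/html/" 2>/dev/null || true

if test -f "$workdir/html/index.html"; then moved=yes; else moved=no; fi
echo "moved=$moved"
```

Remove `$workdir` afterwards with `rm -r "$workdir"`.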
May 17 13:05:19.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:05:19.535: INFO: namespace statefulset-8250 deletion completed in 6.081911582s
• [SLOW TEST:88.393 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-apps] ReplicationController should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:05:19.535: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
May 17 13:05:19.616: INFO: Pod name pod-release: Found 0 pods out of 1
May 17 13:05:24.620: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:05:25.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8371" for this suite.
May 17 13:05:31.781: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:05:31.927: INFO: namespace replication-controller-8371 deletion completed in 6.268348584s
• [SLOW TEST:12.392 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should release no longer matching pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:05:31.928: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4835
[It] should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
May 17 13:05:32.065: INFO: Found 0 stateful pods, waiting for 3
May 17 13:05:42.069: INFO:
Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 13:05:42.069: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 13:05:42.069: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 17 13:05:52.069: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 13:05:52.069: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 13:05:52.069: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 17 13:05:52.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4835 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 17 13:05:52.336: INFO: stderr: "I0517 13:05:52.216133 289 log.go:172] (0xc0009860b0) (0xc000a065a0) Create stream\nI0517 13:05:52.216310 289 log.go:172] (0xc0009860b0) (0xc000a065a0) Stream added, broadcasting: 1\nI0517 13:05:52.218404 289 log.go:172] (0xc0009860b0) Reply frame received for 1\nI0517 13:05:52.218452 289 log.go:172] (0xc0009860b0) (0xc0000d4280) Create stream\nI0517 13:05:52.218469 289 log.go:172] (0xc0009860b0) (0xc0000d4280) Stream added, broadcasting: 3\nI0517 13:05:52.219251 289 log.go:172] (0xc0009860b0) Reply frame received for 3\nI0517 13:05:52.219278 289 log.go:172] (0xc0009860b0) (0xc000a066e0) Create stream\nI0517 13:05:52.219286 289 log.go:172] (0xc0009860b0) (0xc000a066e0) Stream added, broadcasting: 5\nI0517 13:05:52.220009 289 log.go:172] (0xc0009860b0) Reply frame received for 5\nI0517 13:05:52.301983 289 log.go:172] (0xc0009860b0) Data frame received for 5\nI0517 13:05:52.302013 289 log.go:172] (0xc000a066e0) (5) Data frame handling\nI0517 13:05:52.302032 289 log.go:172] (0xc000a066e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 13:05:52.329836 289 log.go:172] (0xc0009860b0) 
Data frame received for 5\nI0517 13:05:52.329857 289 log.go:172] (0xc000a066e0) (5) Data frame handling\nI0517 13:05:52.329881 289 log.go:172] (0xc0009860b0) Data frame received for 3\nI0517 13:05:52.329894 289 log.go:172] (0xc0000d4280) (3) Data frame handling\nI0517 13:05:52.329906 289 log.go:172] (0xc0000d4280) (3) Data frame sent\nI0517 13:05:52.329914 289 log.go:172] (0xc0009860b0) Data frame received for 3\nI0517 13:05:52.329921 289 log.go:172] (0xc0000d4280) (3) Data frame handling\nI0517 13:05:52.331784 289 log.go:172] (0xc0009860b0) Data frame received for 1\nI0517 13:05:52.331804 289 log.go:172] (0xc000a065a0) (1) Data frame handling\nI0517 13:05:52.331818 289 log.go:172] (0xc000a065a0) (1) Data frame sent\nI0517 13:05:52.331834 289 log.go:172] (0xc0009860b0) (0xc000a065a0) Stream removed, broadcasting: 1\nI0517 13:05:52.331951 289 log.go:172] (0xc0009860b0) Go away received\nI0517 13:05:52.332130 289 log.go:172] (0xc0009860b0) (0xc000a065a0) Stream removed, broadcasting: 1\nI0517 13:05:52.332142 289 log.go:172] (0xc0009860b0) (0xc0000d4280) Stream removed, broadcasting: 3\nI0517 13:05:52.332148 289 log.go:172] (0xc0009860b0) (0xc000a066e0) Stream removed, broadcasting: 5\n" May 17 13:05:52.337: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 17 13:05:52.337: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 17 13:06:02.387: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 17 13:06:12.413: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4835 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 17 13:06:12.663: INFO: stderr: "I0517 13:06:12.557827 308 log.go:172] 
(0xc00094e420) (0xc0003fe820) Create stream\nI0517 13:06:12.557878 308 log.go:172] (0xc00094e420) (0xc0003fe820) Stream added, broadcasting: 1\nI0517 13:06:12.562452 308 log.go:172] (0xc00094e420) Reply frame received for 1\nI0517 13:06:12.562511 308 log.go:172] (0xc00094e420) (0xc000660320) Create stream\nI0517 13:06:12.562536 308 log.go:172] (0xc00094e420) (0xc000660320) Stream added, broadcasting: 3\nI0517 13:06:12.563670 308 log.go:172] (0xc00094e420) Reply frame received for 3\nI0517 13:06:12.563717 308 log.go:172] (0xc00094e420) (0xc0003fe000) Create stream\nI0517 13:06:12.563729 308 log.go:172] (0xc00094e420) (0xc0003fe000) Stream added, broadcasting: 5\nI0517 13:06:12.564587 308 log.go:172] (0xc00094e420) Reply frame received for 5\nI0517 13:06:12.655003 308 log.go:172] (0xc00094e420) Data frame received for 3\nI0517 13:06:12.655042 308 log.go:172] (0xc000660320) (3) Data frame handling\nI0517 13:06:12.655059 308 log.go:172] (0xc000660320) (3) Data frame sent\nI0517 13:06:12.655066 308 log.go:172] (0xc00094e420) Data frame received for 3\nI0517 13:06:12.655071 308 log.go:172] (0xc000660320) (3) Data frame handling\nI0517 13:06:12.655338 308 log.go:172] (0xc00094e420) Data frame received for 5\nI0517 13:06:12.655357 308 log.go:172] (0xc0003fe000) (5) Data frame handling\nI0517 13:06:12.655375 308 log.go:172] (0xc0003fe000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0517 13:06:12.655419 308 log.go:172] (0xc00094e420) Data frame received for 5\nI0517 13:06:12.655442 308 log.go:172] (0xc0003fe000) (5) Data frame handling\nI0517 13:06:12.657051 308 log.go:172] (0xc00094e420) Data frame received for 1\nI0517 13:06:12.657072 308 log.go:172] (0xc0003fe820) (1) Data frame handling\nI0517 13:06:12.657087 308 log.go:172] (0xc0003fe820) (1) Data frame sent\nI0517 13:06:12.657096 308 log.go:172] (0xc00094e420) (0xc0003fe820) Stream removed, broadcasting: 1\nI0517 13:06:12.657310 308 log.go:172] (0xc00094e420) Go away received\nI0517 
13:06:12.657860 308 log.go:172] (0xc00094e420) (0xc0003fe820) Stream removed, broadcasting: 1\nI0517 13:06:12.657882 308 log.go:172] (0xc00094e420) (0xc000660320) Stream removed, broadcasting: 3\nI0517 13:06:12.657893 308 log.go:172] (0xc00094e420) (0xc0003fe000) Stream removed, broadcasting: 5\n" May 17 13:06:12.664: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 17 13:06:12.664: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' STEP: Rolling back to a previous revision May 17 13:06:32.689: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4835 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' May 17 13:06:32.954: INFO: stderr: "I0517 13:06:32.816380 330 log.go:172] (0xc0009b6420) (0xc0007da6e0) Create stream\nI0517 13:06:32.816439 330 log.go:172] (0xc0009b6420) (0xc0007da6e0) Stream added, broadcasting: 1\nI0517 13:06:32.820925 330 log.go:172] (0xc0009b6420) Reply frame received for 1\nI0517 13:06:32.820966 330 log.go:172] (0xc0009b6420) (0xc0007da000) Create stream\nI0517 13:06:32.820978 330 log.go:172] (0xc0009b6420) (0xc0007da000) Stream added, broadcasting: 3\nI0517 13:06:32.822085 330 log.go:172] (0xc0009b6420) Reply frame received for 3\nI0517 13:06:32.822124 330 log.go:172] (0xc0009b6420) (0xc0007da0a0) Create stream\nI0517 13:06:32.822135 330 log.go:172] (0xc0009b6420) (0xc0007da0a0) Stream added, broadcasting: 5\nI0517 13:06:32.822974 330 log.go:172] (0xc0009b6420) Reply frame received for 5\nI0517 13:06:32.905998 330 log.go:172] (0xc0009b6420) Data frame received for 5\nI0517 13:06:32.906038 330 log.go:172] (0xc0007da0a0) (5) Data frame handling\nI0517 13:06:32.906071 330 log.go:172] (0xc0007da0a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 13:06:32.944795 330 log.go:172] (0xc0009b6420) Data frame received for 3\nI0517 
13:06:32.944823 330 log.go:172] (0xc0007da000) (3) Data frame handling\nI0517 13:06:32.944857 330 log.go:172] (0xc0007da000) (3) Data frame sent\nI0517 13:06:32.944884 330 log.go:172] (0xc0009b6420) Data frame received for 3\nI0517 13:06:32.944900 330 log.go:172] (0xc0007da000) (3) Data frame handling\nI0517 13:06:32.944927 330 log.go:172] (0xc0009b6420) Data frame received for 5\nI0517 13:06:32.944942 330 log.go:172] (0xc0007da0a0) (5) Data frame handling\nI0517 13:06:32.947234 330 log.go:172] (0xc0009b6420) Data frame received for 1\nI0517 13:06:32.947254 330 log.go:172] (0xc0007da6e0) (1) Data frame handling\nI0517 13:06:32.947265 330 log.go:172] (0xc0007da6e0) (1) Data frame sent\nI0517 13:06:32.947285 330 log.go:172] (0xc0009b6420) (0xc0007da6e0) Stream removed, broadcasting: 1\nI0517 13:06:32.947309 330 log.go:172] (0xc0009b6420) Go away received\nI0517 13:06:32.947781 330 log.go:172] (0xc0009b6420) (0xc0007da6e0) Stream removed, broadcasting: 1\nI0517 13:06:32.947809 330 log.go:172] (0xc0009b6420) (0xc0007da000) Stream removed, broadcasting: 3\nI0517 13:06:32.947820 330 log.go:172] (0xc0009b6420) (0xc0007da0a0) Stream removed, broadcasting: 5\n" May 17 13:06:32.954: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" May 17 13:06:32.954: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' May 17 13:06:43.016: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 17 13:06:53.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4835 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' May 17 13:06:53.277: INFO: stderr: "I0517 13:06:53.178182 350 log.go:172] (0xc0009386e0) (0xc0007a4b40) Create stream\nI0517 13:06:53.178249 350 log.go:172] (0xc0009386e0) (0xc0007a4b40) Stream added, broadcasting: 1\nI0517 13:06:53.181108 350 log.go:172] (0xc0009386e0) 
Reply frame received for 1\nI0517 13:06:53.181320 350 log.go:172] (0xc0009386e0) (0xc000820000) Create stream\nI0517 13:06:53.181359 350 log.go:172] (0xc0009386e0) (0xc000820000) Stream added, broadcasting: 3\nI0517 13:06:53.182526 350 log.go:172] (0xc0009386e0) Reply frame received for 3\nI0517 13:06:53.182576 350 log.go:172] (0xc0009386e0) (0xc0007a4be0) Create stream\nI0517 13:06:53.182597 350 log.go:172] (0xc0009386e0) (0xc0007a4be0) Stream added, broadcasting: 5\nI0517 13:06:53.183675 350 log.go:172] (0xc0009386e0) Reply frame received for 5\nI0517 13:06:53.270933 350 log.go:172] (0xc0009386e0) Data frame received for 3\nI0517 13:06:53.271000 350 log.go:172] (0xc000820000) (3) Data frame handling\nI0517 13:06:53.271025 350 log.go:172] (0xc000820000) (3) Data frame sent\nI0517 13:06:53.271043 350 log.go:172] (0xc0009386e0) Data frame received for 3\nI0517 13:06:53.271058 350 log.go:172] (0xc000820000) (3) Data frame handling\nI0517 13:06:53.271083 350 log.go:172] (0xc0009386e0) Data frame received for 5\nI0517 13:06:53.271098 350 log.go:172] (0xc0007a4be0) (5) Data frame handling\nI0517 13:06:53.271115 350 log.go:172] (0xc0007a4be0) (5) Data frame sent\nI0517 13:06:53.271127 350 log.go:172] (0xc0009386e0) Data frame received for 5\nI0517 13:06:53.271148 350 log.go:172] (0xc0007a4be0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0517 13:06:53.272429 350 log.go:172] (0xc0009386e0) Data frame received for 1\nI0517 13:06:53.272456 350 log.go:172] (0xc0007a4b40) (1) Data frame handling\nI0517 13:06:53.272467 350 log.go:172] (0xc0007a4b40) (1) Data frame sent\nI0517 13:06:53.272485 350 log.go:172] (0xc0009386e0) (0xc0007a4b40) Stream removed, broadcasting: 1\nI0517 13:06:53.272807 350 log.go:172] (0xc0009386e0) (0xc0007a4b40) Stream removed, broadcasting: 1\nI0517 13:06:53.272829 350 log.go:172] (0xc0009386e0) (0xc000820000) Stream removed, broadcasting: 3\nI0517 13:06:53.272869 350 log.go:172] (0xc0009386e0) Go away received\nI0517 
13:06:53.273055 350 log.go:172] (0xc0009386e0) (0xc0007a4be0) Stream removed, broadcasting: 5\n" May 17 13:06:53.277: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" May 17 13:06:53.277: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' May 17 13:07:23.386: INFO: Waiting for StatefulSet statefulset-4835/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 17 13:07:33.394: INFO: Deleting all statefulset in ns statefulset-4835 May 17 13:07:33.397: INFO: Scaling statefulset ss2 to 0 May 17 13:08:03.411: INFO: Waiting for statefulset status.replicas updated to 0 May 17 13:08:03.414: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:08:03.429: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4835" for this suite. 
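The ordering this test exercises — "Updating Pods in reverse ordinal order", "Rolling back update in reverse ordinal order", and the earlier "scaled down in reverse order" — is the StatefulSet controller walking pods from the highest ordinal down to 0. A small local sketch of that iteration (the set name and replica count are illustrative, not taken from the controller's code):

```shell
# Sketch of reverse-ordinal ordering: a StatefulSet controller updates and
# scales down pods from the highest ordinal to the lowest. The name "ss2"
# and replicas=3 mirror this log but are hard-coded for illustration.
name=ss2
replicas=3

update_order=""
i=$((replicas - 1))
while [ "$i" -ge 0 ]; do
  update_order="$update_order $name-$i"
  i=$((i - 1))
done
echo "update/scale-down order:$update_order"
```

Running it prints the same sequence visible in the log: ss2-2 first, ss2-0 last.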
May 17 13:08:11.449: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:08:11.546: INFO: namespace statefulset-4835 deletion completed in 8.114807913s
• [SLOW TEST:159.618 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should perform rolling updates and roll backs of template modifications [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:08:11.547: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
May 17 13:08:18.349: INFO: 10 pods remaining
May 17 13:08:18.349: INFO: 9 pods has nil DeletionTimestamp
May 17 13:08:18.349: INFO:
May 17 13:08:20.582: INFO: 0 pods remaining
May 17 13:08:20.582: INFO: 0 pods has nil DeletionTimestamp
May 17 13:08:20.582: INFO:
May 17 13:08:20.838: INFO: 0 pods remaining
May 17 13:08:20.838: INFO: 0 pods has nil DeletionTimestamp
May 17 13:08:20.838: INFO:
STEP: Gathering metrics
W0517 13:08:21.966245 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 17 13:08:21.966: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:08:21.966: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-6630" for this suite.
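The "deleteOptions" the spec name refers to is the `propagationPolicy` field of meta/v1 DeleteOptions: with `Foreground`, the garbage collector keeps the owner (the RC) around, carrying a deletion timestamp, until every dependent pod is gone — exactly the "N pods remaining" countdown above. A hedged sketch of that request body (the RC name in the comment is a placeholder, and the `--cascade=foreground` spelling exists only in kubectl releases newer than the v1.15 client used in this run):

```shell
# The DeleteOptions body that requests foreground cascading deletion.
# Built and checked locally; nothing here contacts a cluster.
body='{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}'
echo "$body"

# Against a live cluster this body would accompany the DELETE request, e.g.
# with a newer kubectl:  kubectl delete rc <name> --cascade=foreground
```

The alternatives are `Background` (delete the owner at once, collect dependents later) and `Orphan` (leave the pods behind).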
May 17 13:08:28.031: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:08:28.127: INFO: namespace gc-6630 deletion completed in 6.158794923s
• [SLOW TEST:16.580 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:08:28.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-ee0733e7-7b48-4c51-a1bc-dffe3486bac0
STEP: Creating a pod to test consume secrets
May 17 13:08:28.225: INFO: Waiting up to 5m0s for pod "pod-secrets-fb4cc9b0-4e57-4f4e-9209-636f6d3cee2b" in namespace "secrets-5468" to be "success or failure"
May 17 13:08:28.248: INFO: Pod "pod-secrets-fb4cc9b0-4e57-4f4e-9209-636f6d3cee2b": Phase="Pending", Reason="", readiness=false. Elapsed: 22.651664ms
May 17 13:08:30.252: INFO: Pod "pod-secrets-fb4cc9b0-4e57-4f4e-9209-636f6d3cee2b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02715459s
May 17 13:08:32.257: INFO: Pod "pod-secrets-fb4cc9b0-4e57-4f4e-9209-636f6d3cee2b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031259531s
STEP: Saw pod success
May 17 13:08:32.257: INFO: Pod "pod-secrets-fb4cc9b0-4e57-4f4e-9209-636f6d3cee2b" satisfied condition "success or failure"
May 17 13:08:32.260: INFO: Trying to get logs from node iruya-worker pod pod-secrets-fb4cc9b0-4e57-4f4e-9209-636f6d3cee2b container secret-volume-test:
STEP: delete the pod
May 17 13:08:32.279: INFO: Waiting for pod pod-secrets-fb4cc9b0-4e57-4f4e-9209-636f6d3cee2b to disappear
May 17 13:08:32.313: INFO: Pod pod-secrets-fb4cc9b0-4e57-4f4e-9209-636f6d3cee2b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:08:32.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5468" for this suite.
May 17 13:08:38.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:08:38.410: INFO: namespace secrets-5468 deletion completed in 6.093301523s
• [SLOW TEST:10.283 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:08:38.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-88300400-ac4f-4bee-9e18-4f0a611a8c66
STEP: Creating a pod to test consume secrets
May 17 13:08:38.524: INFO: Waiting up to 5m0s for pod "pod-secrets-7e4d027b-f524-4a11-9484-bf693723aec8" in namespace "secrets-2081" to be "success or failure"
May 17 13:08:38.547: INFO: Pod "pod-secrets-7e4d027b-f524-4a11-9484-bf693723aec8": Phase="Pending", Reason="", readiness=false. Elapsed: 23.069901ms
May 17 13:08:40.552: INFO: Pod "pod-secrets-7e4d027b-f524-4a11-9484-bf693723aec8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027838716s
May 17 13:08:42.556: INFO: Pod "pod-secrets-7e4d027b-f524-4a11-9484-bf693723aec8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032523634s
STEP: Saw pod success
May 17 13:08:42.556: INFO: Pod "pod-secrets-7e4d027b-f524-4a11-9484-bf693723aec8" satisfied condition "success or failure"
May 17 13:08:42.560: INFO: Trying to get logs from node iruya-worker pod pod-secrets-7e4d027b-f524-4a11-9484-bf693723aec8 container secret-volume-test:
STEP: delete the pod
May 17 13:08:42.626: INFO: Waiting for pod pod-secrets-7e4d027b-f524-4a11-9484-bf693723aec8 to disappear
May 17 13:08:42.631: INFO: Pod pod-secrets-7e4d027b-f524-4a11-9484-bf693723aec8 no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:08:42.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2081" for this suite.
May 17 13:08:48.660: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:08:48.733: INFO: namespace secrets-2081 deletion completed in 6.098892539s
• [SLOW TEST:10.323 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:08:48.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
May 17 13:08:49.370: INFO: Pod name wrapped-volume-race-4a3aaf82-9b8d-4e41-9fbd-0ffae885a375: Found 0 pods out of 5
May 17 13:08:54.379: INFO: Pod name wrapped-volume-race-4a3aaf82-9b8d-4e41-9fbd-0ffae885a375: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4a3aaf82-9b8d-4e41-9fbd-0ffae885a375 in namespace emptydir-wrapper-1245, will wait for the garbage collector to delete the pods
May 17 13:09:08.463: INFO: Deleting ReplicationController wrapped-volume-race-4a3aaf82-9b8d-4e41-9fbd-0ffae885a375 took: 11.032066ms
May 17 13:09:08.763: INFO: Terminating ReplicationController wrapped-volume-race-4a3aaf82-9b8d-4e41-9fbd-0ffae885a375 pods took: 300.265951ms
STEP: Creating RC which spawns configmap-volume pods
May 17 13:09:53.292: INFO: Pod name wrapped-volume-race-0fc70b37-8038-4fa8-be24-e1991ab46992: Found 0 pods out of 5
May 17 13:09:58.301: INFO: Pod name wrapped-volume-race-0fc70b37-8038-4fa8-be24-e1991ab46992: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0fc70b37-8038-4fa8-be24-e1991ab46992 in namespace emptydir-wrapper-1245, will wait for the garbage collector to delete the pods
May 17 13:10:12.409: INFO: Deleting ReplicationController wrapped-volume-race-0fc70b37-8038-4fa8-be24-e1991ab46992 took: 32.537633ms
May 17 13:10:12.710: INFO: Terminating ReplicationController wrapped-volume-race-0fc70b37-8038-4fa8-be24-e1991ab46992 pods took: 300.36016ms
STEP: Creating RC which spawns configmap-volume pods
May 17 13:10:53.237: INFO: Pod name wrapped-volume-race-605cf44c-b004-474c-97db-014ac7a9c8c7: Found 0 pods out of 5
May 17 13:10:58.243: INFO: Pod name wrapped-volume-race-605cf44c-b004-474c-97db-014ac7a9c8c7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-605cf44c-b004-474c-97db-014ac7a9c8c7 in namespace emptydir-wrapper-1245, will wait for the garbage collector to delete the pods
May 17 13:11:14.333: INFO: Deleting ReplicationController wrapped-volume-race-605cf44c-b004-474c-97db-014ac7a9c8c7 took: 7.839993ms
May 17 13:11:14.633: INFO: Terminating ReplicationController wrapped-volume-race-605cf44c-b004-474c-97db-014ac7a9c8c7 pods took: 300.25941ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:11:53.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1245" for this suite.
May 17 13:12:01.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:12:01.379: INFO: namespace emptydir-wrapper-1245 deletion completed in 8.098873025s
• [SLOW TEST:192.646 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:12:01.381: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:12:01.436: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:12:02.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-9190" for this suite.
May 17 13:12:08.589: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:12:08.673: INFO: namespace custom-resource-definition-9190 deletion completed in 6.102411061s
• [SLOW TEST:7.293 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
Simple CustomResourceDefinition
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
creating/deleting custom resource definition objects works [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:12:08.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:12:08.744: INFO: Create a RollingUpdate DaemonSet
May 17 13:12:08.808: INFO: Check that daemon pods launch on every node of the cluster
May 17 13:12:08.813: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:12:08.856: INFO: Number of nodes with available pods: 0 May 17 13:12:08.856: INFO: Node iruya-worker is running more than one daemon pod May 17 13:12:09.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:12:09.864: INFO: Number of nodes with available pods: 0 May 17 13:12:09.864: INFO: Node iruya-worker is running more than one daemon pod May 17 13:12:10.862: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:12:10.865: INFO: Number of nodes with available pods: 0 May 17 13:12:10.865: INFO: Node iruya-worker is running more than one daemon pod May 17 13:12:11.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:12:11.864: INFO: Number of nodes with available pods: 0 May 17 13:12:11.864: INFO: Node iruya-worker is running more than one daemon pod May 17 13:12:12.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:12:12.865: INFO: Number of nodes with available pods: 1 May 17 13:12:12.865: INFO: Node iruya-worker is running more than one daemon pod May 17 13:12:13.861: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:12:13.864: INFO: Number of nodes with available pods: 2 May 17 13:12:13.864: INFO: Number of running nodes: 2, number of available pods: 2 May 17 13:12:13.864: INFO: Update the 
DaemonSet to trigger a rollout May 17 13:12:13.872: INFO: Updating DaemonSet daemon-set May 17 13:12:22.908: INFO: Roll back the DaemonSet before rollout is complete May 17 13:12:22.914: INFO: Updating DaemonSet daemon-set May 17 13:12:22.914: INFO: Make sure DaemonSet rollback is complete May 17 13:12:22.964: INFO: Wrong image for pod: daemon-set-srksv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 17 13:12:22.964: INFO: Pod daemon-set-srksv is not available May 17 13:12:22.967: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:12:23.972: INFO: Wrong image for pod: daemon-set-srksv. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent. May 17 13:12:23.972: INFO: Pod daemon-set-srksv is not available May 17 13:12:23.977: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:12:25.058: INFO: Pod daemon-set-vrtdj is not available May 17 13:12:25.071: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-852, will wait for the garbage collector to delete the pods May 17 13:12:25.192: INFO: Deleting DaemonSet.extensions daemon-set took: 34.692501ms May 17 13:12:25.493: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.472825ms May 17 13:12:29.596: INFO: Number of nodes with available pods: 0 May 17 13:12:29.596: INFO: Number of running nodes: 0, number of available pods: 0 May 17 13:12:29.602: 
INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-852/daemonsets","resourceVersion":"11395852"},"items":null} May 17 13:12:29.605: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-852/pods","resourceVersion":"11395852"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:12:29.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-852" for this suite. May 17 13:12:35.632: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:12:35.709: INFO: namespace daemonsets-852 deletion completed in 6.089131088s • [SLOW TEST:27.035 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:12:35.710: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run rc 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 17 13:12:35.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-2207' May 17 13:12:38.590: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 17 13:12:38.590: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created STEP: confirm that you can get logs from an rc May 17 13:12:38.623: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-l9jwm] May 17 13:12:38.623: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-l9jwm" in namespace "kubectl-2207" to be "running and ready" May 17 13:12:38.628: INFO: Pod "e2e-test-nginx-rc-l9jwm": Phase="Pending", Reason="", readiness=false. Elapsed: 5.744047ms May 17 13:12:40.632: INFO: Pod "e2e-test-nginx-rc-l9jwm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009520064s May 17 13:12:42.637: INFO: Pod "e2e-test-nginx-rc-l9jwm": Phase="Running", Reason="", readiness=true. Elapsed: 4.013842706s May 17 13:12:42.637: INFO: Pod "e2e-test-nginx-rc-l9jwm" satisfied condition "running and ready" May 17 13:12:42.637: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-nginx-rc-l9jwm] May 17 13:12:42.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-2207' May 17 13:12:42.748: INFO: stderr: "" May 17 13:12:42.748: INFO: stdout: "" [AfterEach] [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 May 17 13:12:42.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-2207' May 17 13:12:42.848: INFO: stderr: "" May 17 13:12:42.848: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:12:42.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2207" for this suite. May 17 13:13:04.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:13:04.939: INFO: namespace kubectl-2207 deletion completed in 22.087228943s • [SLOW TEST:29.229 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a 
kubernetes client May 17 13:13:04.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 17 13:13:04.993: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:13:13.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-7107" for this suite. May 17 13:13:35.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:13:35.444: INFO: namespace init-container-7107 deletion completed in 22.084413263s • [SLOW TEST:30.505 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:13:35.445: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 13:13:35.507: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d" in namespace "projected-7147" to be "success or failure" May 17 13:13:35.538: INFO: Pod "downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d": Phase="Pending", Reason="", readiness=false. Elapsed: 31.005851ms May 17 13:13:37.542: INFO: Pod "downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03510067s May 17 13:13:39.546: INFO: Pod "downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d": Phase="Running", Reason="", readiness=true. Elapsed: 4.039708774s May 17 13:13:41.551: INFO: Pod "downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.044356975s STEP: Saw pod success May 17 13:13:41.551: INFO: Pod "downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d" satisfied condition "success or failure" May 17 13:13:41.555: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d container client-container: STEP: delete the pod May 17 13:13:41.593: INFO: Waiting for pod downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d to disappear May 17 13:13:41.623: INFO: Pod downwardapi-volume-e49df5e1-3c62-4b6c-92f8-221fa1e7f17d no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:13:41.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7147" for this suite. May 17 13:13:47.644: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:13:47.741: INFO: namespace projected-7147 deletion completed in 6.115047276s • [SLOW TEST:12.296 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:13:47.742: INFO: >>> kubeConfig: /root/.kube/config 
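The Projected downwardAPI test above ("should provide container's memory limit") mounts the container's own memory limit as a file through a projected downwardAPI volume and verifies it from the container log. A minimal sketch of that pattern — the pod name, image, paths, and limit value here are illustrative, not the suite's actual manifest:

```yaml
# Hedged sketch: a pod that reads its own memory limit from a file
# populated by a projected downwardAPI volume. Names are illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: "64Mi"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

As in the log above, such a test pod is polled until it reaches Phase="Succeeded" ("success or failure"), after which the container log is fetched to check the mounted value.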
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-f34e564e-91ec-4130-8b0c-bb95223b726b
STEP: Creating a pod to test consume configMaps
May 17 13:13:47.822: INFO: Waiting up to 5m0s for pod "pod-configmaps-c92d9539-f438-4893-a232-f3dd5632e3f8" in namespace "configmap-4392" to be "success or failure"
May 17 13:13:47.826: INFO: Pod "pod-configmaps-c92d9539-f438-4893-a232-f3dd5632e3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 3.921266ms
May 17 13:13:49.830: INFO: Pod "pod-configmaps-c92d9539-f438-4893-a232-f3dd5632e3f8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008544365s
May 17 13:13:51.835: INFO: Pod "pod-configmaps-c92d9539-f438-4893-a232-f3dd5632e3f8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012980658s
STEP: Saw pod success
May 17 13:13:51.835: INFO: Pod "pod-configmaps-c92d9539-f438-4893-a232-f3dd5632e3f8" satisfied condition "success or failure"
May 17 13:13:51.839: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-c92d9539-f438-4893-a232-f3dd5632e3f8 container configmap-volume-test:
STEP: delete the pod
May 17 13:13:52.067: INFO: Waiting for pod pod-configmaps-c92d9539-f438-4893-a232-f3dd5632e3f8 to disappear
May 17 13:13:52.180: INFO: Pod pod-configmaps-c92d9539-f438-4893-a232-f3dd5632e3f8 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:13:52.180: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4392" for this suite.
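The ConfigMap volume test logged above runs its pod as a non-root user and reads a key from the mounted volume. An illustrative manifest for the same pattern — the image, user ID, key name, and mount path are assumptions, not the suite's actual spec:

```yaml
# Hedged sketch of a ConfigMap consumed as a volume by a non-root pod.
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example
spec:
  securityContext:
    runAsUser: 1000        # non-root, per the [LinuxOnly] non-root variant
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-example
```

The test then asserts the expected key contents by reading the container log once the pod has succeeded, mirroring the "Trying to get logs from node ..." step above.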
May 17 13:13:58.201: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:13:58.278: INFO: namespace configmap-4392 deletion completed in 6.093051114s • [SLOW TEST:10.536 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:13:58.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-projected-f85j STEP: Creating a pod to test atomic-volume-subpath May 17 13:13:58.381: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-f85j" in namespace "subpath-1877" to be "success or failure" May 17 13:13:58.419: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Pending", Reason="", readiness=false. 
Elapsed: 38.506162ms May 17 13:14:00.423: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Pending", Reason="", readiness=false. Elapsed: 2.042316827s May 17 13:14:02.427: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 4.046358853s May 17 13:14:04.431: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 6.050626298s May 17 13:14:06.436: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 8.055167863s May 17 13:14:08.440: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 10.058678979s May 17 13:14:10.444: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 12.062739972s May 17 13:14:12.448: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 14.066999435s May 17 13:14:14.453: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 16.071682046s May 17 13:14:16.458: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 18.076958021s May 17 13:14:18.461: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 20.080646857s May 17 13:14:20.466: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Running", Reason="", readiness=true. Elapsed: 22.084857433s May 17 13:14:22.476: INFO: Pod "pod-subpath-test-projected-f85j": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.095150357s STEP: Saw pod success May 17 13:14:22.476: INFO: Pod "pod-subpath-test-projected-f85j" satisfied condition "success or failure" May 17 13:14:22.478: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-projected-f85j container test-container-subpath-projected-f85j: STEP: delete the pod May 17 13:14:22.500: INFO: Waiting for pod pod-subpath-test-projected-f85j to disappear May 17 13:14:22.511: INFO: Pod pod-subpath-test-projected-f85j no longer exists STEP: Deleting pod pod-subpath-test-projected-f85j May 17 13:14:22.511: INFO: Deleting pod "pod-subpath-test-projected-f85j" in namespace "subpath-1877" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:14:22.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-1877" for this suite. May 17 13:14:28.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:14:28.605: INFO: namespace subpath-1877 deletion completed in 6.088846086s • [SLOW TEST:30.327 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 
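The Subpath test above ("should support subpaths with projected pod") mounts a single path inside a projected volume via `subPath` and polls the pod through Running into Succeeded over roughly 24 seconds. A hedged sketch of the volumeMount pattern — the volume source, key, and paths here are illustrative:

```yaml
# Hedged sketch: mounting one path of a projected volume with subPath.
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-example
spec:
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/contents"]
    volumeMounts:
    - name: projected-vol
      mountPath: /test-volume
      subPath: sub-dir        # mount only this path within the volume
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-configmap-example
```

The "atomic writer" framing in the test name refers to how kubelet updates projected/configMap volume contents atomically via symlink swaps; the long Running phase in the log is the test repeatedly verifying the subPath-mounted file stays consistent across those updates.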
STEP: Creating a kubernetes client May 17 13:14:28.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 13:14:28.667: INFO: Waiting up to 5m0s for pod "downwardapi-volume-415a9519-935f-4074-9bfa-8350b5aad821" in namespace "downward-api-3985" to be "success or failure" May 17 13:14:28.670: INFO: Pod "downwardapi-volume-415a9519-935f-4074-9bfa-8350b5aad821": Phase="Pending", Reason="", readiness=false. Elapsed: 3.545042ms May 17 13:14:30.675: INFO: Pod "downwardapi-volume-415a9519-935f-4074-9bfa-8350b5aad821": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008198857s May 17 13:14:32.679: INFO: Pod "downwardapi-volume-415a9519-935f-4074-9bfa-8350b5aad821": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012171876s STEP: Saw pod success May 17 13:14:32.679: INFO: Pod "downwardapi-volume-415a9519-935f-4074-9bfa-8350b5aad821" satisfied condition "success or failure" May 17 13:14:32.682: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-415a9519-935f-4074-9bfa-8350b5aad821 container client-container: STEP: delete the pod May 17 13:14:32.733: INFO: Waiting for pod downwardapi-volume-415a9519-935f-4074-9bfa-8350b5aad821 to disappear May 17 13:14:32.736: INFO: Pod downwardapi-volume-415a9519-935f-4074-9bfa-8350b5aad821 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:14:32.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3985" for this suite. May 17 13:14:38.752: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:14:38.831: INFO: namespace downward-api-3985 deletion completed in 6.091701858s • [SLOW TEST:10.226 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:14:38.832: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be 
provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating Redis RC May 17 13:14:38.884: INFO: namespace kubectl-3978 May 17 13:14:38.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3978' May 17 13:14:39.223: INFO: stderr: "" May 17 13:14:39.223: INFO: stdout: "replicationcontroller/redis-master created\n" STEP: Waiting for Redis master to start. May 17 13:14:40.229: INFO: Selector matched 1 pods for map[app:redis] May 17 13:14:40.229: INFO: Found 0 / 1 May 17 13:14:41.227: INFO: Selector matched 1 pods for map[app:redis] May 17 13:14:41.227: INFO: Found 0 / 1 May 17 13:14:42.227: INFO: Selector matched 1 pods for map[app:redis] May 17 13:14:42.227: INFO: Found 0 / 1 May 17 13:14:43.227: INFO: Selector matched 1 pods for map[app:redis] May 17 13:14:43.227: INFO: Found 1 / 1 May 17 13:14:43.227: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 17 13:14:43.231: INFO: Selector matched 1 pods for map[app:redis] May 17 13:14:43.231: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 17 13:14:43.231: INFO: wait on redis-master startup in kubectl-3978 May 17 13:14:43.231: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-9p7xr redis-master --namespace=kubectl-3978' May 17 13:14:43.341: INFO: stderr: "" May 17 13:14:43.341: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. 
```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 May 13:14:42.224 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 May 13:14:42.224 # Server started, Redis version 3.2.12\n1:M 17 May 13:14:42.224 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 May 13:14:42.224 * The server is now ready to accept connections on port 6379\n" STEP: exposing RC May 17 13:14:43.341: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-3978' May 17 13:14:43.478: INFO: stderr: "" May 17 13:14:43.478: INFO: stdout: "service/rm2 exposed\n" May 17 13:14:43.488: INFO: Service rm2 in namespace kubectl-3978 found. STEP: exposing service May 17 13:14:45.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-3978' May 17 13:14:45.629: INFO: stderr: "" May 17 13:14:45.629: INFO: stdout: "service/rm3 exposed\n" May 17 13:14:45.656: INFO: Service rm3 in namespace kubectl-3978 found. 
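The `kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379` call above is roughly equivalent to creating the following Service by hand; the `app: redis` selector comes from the RC's pod labels shown earlier in the log ("Selector matched 1 pods for map[app:redis]"):

```yaml
# Approximate equivalent of the kubectl expose command in the log.
apiVersion: v1
kind: Service
metadata:
  name: rm2
  namespace: kubectl-3978
spec:
  selector:
    app: redis
  ports:
  - port: 1234        # service port
    targetPort: 6379  # redis container port
```

The second step, `kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379`, layers another Service over the same selector, so rm2 and rm3 both route to the one redis-master pod on different service ports.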
[AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:14:47.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3978" for this suite.
May 17 13:15:09.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:15:09.747: INFO: namespace kubectl-3978 deletion completed in 22.079385942s
• [SLOW TEST:30.915 seconds]
[sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:15:09.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:15:09.902: INFO: Creating deployment "nginx-deployment"
May 17 13:15:09.920: INFO: Waiting for observed generation 1
May 17 13:15:12.002: INFO: Waiting for all required pods to come up
May 17 13:15:12.033: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
May 17 13:15:22.096: INFO: Waiting for deployment "nginx-deployment" to complete
May 17 13:15:22.109: INFO: Updating deployment "nginx-deployment" with a non-existent image
May 17 13:15:22.114: INFO: Updating deployment nginx-deployment
May 17 13:15:22.114: INFO: Waiting for observed generation 2
May 17 13:15:24.326: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
May 17 13:15:25.023: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
May 17 13:15:25.296: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 17 13:15:25.455: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
May 17 13:15:25.455: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
May 17 13:15:25.457: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
May 17 13:15:25.462: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
May 17 13:15:25.462: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
May 17 13:15:25.467: INFO: Updating deployment nginx-deployment
May 17 13:15:25.467: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
May 17 13:15:26.170: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
May 17 13:15:28.380: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
[AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
May 17 13:15:28.584: INFO: Deployment "nginx-deployment": 
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-9033,SelfLink:/apis/apps/v1/namespaces/deployment-9033/deployments/nginx-deployment,UID:043086e5-f4b8-4b12-ad28-8bae56d3240e,ResourceVersion:11396693,Generation:3,CreationTimestamp:2020-05-17 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Available False 2020-05-17 13:15:25 +0000 UTC 2020-05-17 13:15:25 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-17 13:15:26 +0000 UTC 2020-05-17 13:15:09 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.}],ReadyReplicas:8,CollisionCount:nil,},} May 17 13:15:29.112: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-9033,SelfLink:/apis/apps/v1/namespaces/deployment-9033/replicasets/nginx-deployment-55fb7cb77f,UID:8f14bec5-8c3b-4943-ba1b-3425b9f339fd,ResourceVersion:11396692,Generation:3,CreationTimestamp:2020-05-17 13:15:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 
55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 043086e5-f4b8-4b12-ad28-8bae56d3240e 0xc002ad29a7 0xc002ad29a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 17 13:15:29.112: INFO: All old ReplicaSets of Deployment "nginx-deployment": May 17 13:15:29.112: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-9033,SelfLink:/apis/apps/v1/namespaces/deployment-9033/replicasets/nginx-deployment-7b8c6f4498,UID:2e1677fd-b8ee-4196-81bd-101d8bfcc2c2,ResourceVersion:11396675,Generation:3,CreationTimestamp:2020-05-17 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 043086e5-f4b8-4b12-ad28-8bae56d3240e 0xc002ad2a77 0xc002ad2a78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 
7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},} May 17 13:15:29.378: INFO: Pod "nginx-deployment-55fb7cb77f-5fq4n" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5fq4n,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-5fq4n,UID:1bd6c4c5-1fd0-49f3-a965-f5fd157937a8,ResourceVersion:11396640,Generation:0,CreationTimestamp:2020-05-17 13:15:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad3407 0xc002ad3408}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002ad3480} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad34a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.74,StartTime:2020-05-17 13:15:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.378: INFO: Pod "nginx-deployment-55fb7cb77f-8djcr" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-8djcr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-8djcr,UID:b57dda7a-91b2-425c-825b-82cf81637ff8,ResourceVersion:11396679,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad3597 
0xc002ad3598}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad3610} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad3630}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.378: INFO: Pod "nginx-deployment-55fb7cb77f-9lcjf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9lcjf,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-9lcjf,UID:d2548574-ba48-4eb2-86f9-3a555a116a4b,ResourceVersion:11396709,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad3707 0xc002ad3708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad3780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad37a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.378: INFO: Pod "nginx-deployment-55fb7cb77f-9xn92" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-9xn92,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-9xn92,UID:c12b1bcc-6327-4676-8f1f-c5cefe9f1560,ResourceVersion:11396710,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad3877 0xc002ad3878}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002ad38f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad3910}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.379: INFO: Pod "nginx-deployment-55fb7cb77f-chlmt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-chlmt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-chlmt,UID:f0d59b34-1e71-42d5-8914-40530153567d,ResourceVersion:11396591,Generation:0,CreationTimestamp:2020-05-17 13:15:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad39e7 0xc002ad39e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad3a60} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad3a80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:22 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.379: INFO: Pod "nginx-deployment-55fb7cb77f-flddt" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-flddt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-flddt,UID:2af67dad-f8b0-4d79-b94b-2a45db4ad8ef,ResourceVersion:11396594,Generation:0,CreationTimestamp:2020-05-17 13:15:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad3b57 0xc002ad3b58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad3be0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad3c00}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.379: INFO: Pod "nginx-deployment-55fb7cb77f-fqhb4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-fqhb4,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-fqhb4,UID:47720d9d-4351-4742-b0e5-c9067a2ea9c8,ResourceVersion:11396645,Generation:0,CreationTimestamp:2020-05-17 13:15:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad3cd7 0xc002ad3cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002ad3d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad3d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.190,StartTime:2020-05-17 13:15:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.379: INFO: Pod "nginx-deployment-55fb7cb77f-jm777" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-jm777,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-jm777,UID:1101fbff-ccfd-45a1-b6e6-647de19d77b4,ResourceVersion:11396744,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad3e67 
0xc002ad3e68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002ad3ee0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002ad3f10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready 
status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.379: INFO: Pod "nginx-deployment-55fb7cb77f-kdfdk" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-kdfdk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-kdfdk,UID:0d41d033-028d-4b6d-b5d3-3eddbc4eda21,ResourceVersion:11396694,Generation:0,CreationTimestamp:2020-05-17 13:15:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002ad3fe7 0xc002ad3fe8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb2060} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2080}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.379: INFO: Pod "nginx-deployment-55fb7cb77f-lmvhz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lmvhz,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-lmvhz,UID:5e5f8ed0-6111-4671-b5bf-85a42dd89b1e,ResourceVersion:11396739,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002eb2157 0xc002eb2158}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002eb21d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb21f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.379: INFO: Pod "nginx-deployment-55fb7cb77f-lvhsc" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-lvhsc,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-lvhsc,UID:1fc73748-4ad3-40f9-a020-b9a46f359923,ResourceVersion:11396717,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002eb22c7 0xc002eb22c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb2340} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2360}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx 
{ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.380: INFO: Pod "nginx-deployment-55fb7cb77f-wgdvl" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-wgdvl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-wgdvl,UID:856c0f2e-8fd5-4580-8f89-a164ed0ad14c,ResourceVersion:11396747,Generation:0,CreationTimestamp:2020-05-17 13:15:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002eb2437 0xc002eb2438}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb24b0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb24d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.191,StartTime:2020-05-17 13:15:22 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = NotFound desc = failed to pull and unpack image "docker.io/library/nginx:404": failed to resolve reference "docker.io/library/nginx:404": docker.io/library/nginx:404: not found,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.380: INFO: Pod "nginx-deployment-55fb7cb77f-ztmkg" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-ztmkg,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-55fb7cb77f-ztmkg,UID:b377bd2d-2cfb-42cd-a549-513e1d57b8c3,ResourceVersion:11396730,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f 8f14bec5-8c3b-4943-ba1b-3425b9f339fd 0xc002eb25c7 0xc002eb25c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready 
Exists NoExecute 0xc002eb2650} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2670}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404 }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.380: INFO: Pod "nginx-deployment-7b8c6f4498-5dx2c" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-5dx2c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-5dx2c,UID:540dd78f-e8d7-45cf-af7f-46269d488d2c,ResourceVersion:11396480,Generation:0,CreationTimestamp:2020-05-17 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb2747 0xc002eb2748}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil 
nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb27c0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb27e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:14 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:14 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:09 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.185,StartTime:2020-05-17 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 13:15:14 +0000 UTC,} nil} {nil nil nil} true 0 
docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c346e46fcad47cdccd7cdc519472ae15f88536d438942a4cf69cac590c274737}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.380: INFO: Pod "nginx-deployment-7b8c6f4498-6mdwp" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-6mdwp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-6mdwp,UID:154ae9a3-8f97-4d85-8d33-3c05ee5aba60,ResourceVersion:11396729,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb28b7 0xc002eb28b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb2930} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2950}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.380: INFO: Pod "nginx-deployment-7b8c6f4498-7z5vr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-7z5vr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-7z5vr,UID:7539e730-3d95-4d50-8281-ad71e7409bc8,ResourceVersion:11396748,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb2a17 0xc002eb2a18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb2a90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2ab0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.380: INFO: Pod "nginx-deployment-7b8c6f4498-9vbkr" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9vbkr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-9vbkr,UID:f9387351-fc9e-4f42-b65f-3e1f051c67a1,ResourceVersion:11396698,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb2b77 0xc002eb2b78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb2bf0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2c10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.380: INFO: Pod "nginx-deployment-7b8c6f4498-b6crz" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-b6crz,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-b6crz,UID:6c9c745f-2d6b-4a18-9c43-e27b34b9b3a2,ResourceVersion:11396704,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb2cd7 0xc002eb2cd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb2d50} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.381: INFO: Pod "nginx-deployment-7b8c6f4498-fgf4f" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-fgf4f,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-fgf4f,UID:2e580040-8eff-4a18-b979-c9d928d31013,ResourceVersion:11396525,Generation:0,CreationTimestamp:2020-05-17 13:15:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb2e47 0xc002eb2e48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb2f10} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb2f30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.189,StartTime:2020-05-17 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 13:15:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://082e60766155b821abf93d9acc0639f0a53078c11e2c415f80cc6f3791f1792b}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.381: INFO: Pod "nginx-deployment-7b8c6f4498-hwgf8" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-hwgf8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-hwgf8,UID:cda772f7-a96a-4ba9-8c3f-74e043393140,ResourceVersion:11396527,Generation:0,CreationTimestamp:2020-05-17 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3007 0xc002eb3008}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3080} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb30a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:21 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:21 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.71,StartTime:2020-05-17 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 13:15:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://4edc61424dfebcb3a50e8114f054d4b97a474d01240ffaafd97aa515646ba55a}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.381: INFO: Pod "nginx-deployment-7b8c6f4498-jltdj" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-jltdj,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-jltdj,UID:3c7e8d71-b56f-4708-a3a0-fd20c487fd34,ResourceVersion:11396738,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3177 0xc002eb3178}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3200} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb3220}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.381: INFO: Pod "nginx-deployment-7b8c6f4498-llxgl" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-llxgl,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-llxgl,UID:34d2bcba-8437-4381-a600-f6aa6ec09bfb,ResourceVersion:11396517,Generation:0,CreationTimestamp:2020-05-17 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3327 0xc002eb3328}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb34e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb3500}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:09 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.69,StartTime:2020-05-17 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 13:15:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://c3f362eda97a08b7db07e17bdf3b034eb8fb3cb205e71974e9c1df61f2391f1f}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.381: INFO: Pod "nginx-deployment-7b8c6f4498-nxr7l" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-nxr7l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-nxr7l,UID:f3bb015a-38d6-4599-91f5-7ab9b2794fe7,ResourceVersion:11396671,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb35e7 0xc002eb35e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3660} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb3680}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.381: INFO: Pod "nginx-deployment-7b8c6f4498-qphcv" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qphcv,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-qphcv,UID:f8107a55-13f4-4519-9ad6-c7e080ccaa51,ResourceVersion:11396690,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3707 0xc002eb3708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 
{nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3780} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb37a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC 
}],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.382: INFO: Pod "nginx-deployment-7b8c6f4498-r9v4n" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r9v4n,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-r9v4n,UID:e3d4313c-3d22-4c1f-8f29-7b08d2618dca,ResourceVersion:11396687,Generation:0,CreationTimestamp:2020-05-17 13:15:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3867 0xc002eb3868}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb38e0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb3900}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.382: INFO: Pod "nginx-deployment-7b8c6f4498-rctkm" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rctkm,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-rctkm,UID:7dfb1ef8-af76-4429-a100-43aadd4e2c58,ResourceVersion:11396512,Generation:0,CreationTimestamp:2020-05-17 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb39c7 0xc002eb39c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3a40} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb3a60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:10.244.1.70,StartTime:2020-05-17 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 13:15:19 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://9340cce861d0320ecdce6686d652dc351fdb8366a0e0c8262c16555adac1efa6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.382: INFO: Pod "nginx-deployment-7b8c6f4498-rkck7" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rkck7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-rkck7,UID:3a8900de-9e73-499e-83ae-2ea67bd010d0,ResourceVersion:11396529,Generation:0,CreationTimestamp:2020-05-17 13:15:10 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3b37 0xc002eb3b38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3bb0} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb3be0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:20 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:20 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.188,StartTime:2020-05-17 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 13:15:20 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://6c5cf8c46a3489b4be65d0750114681a564161eee3ff96bde1e36bab8752692d}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.382: INFO: Pod "nginx-deployment-7b8c6f4498-snqpf" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-snqpf,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-snqpf,UID:59fd517c-d5d2-4300-8e8d-07173aa60a22,ResourceVersion:11396701,Generation:0,CreationTimestamp:2020-05-17 13:15:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3cb7 0xc002eb3cb8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3d30} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb3d50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.382: INFO: Pod "nginx-deployment-7b8c6f4498-stz2s" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-stz2s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-stz2s,UID:b7fe165b-8a2b-4b0c-9a50-8af5b1b038c8,ResourceVersion:11396746,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3e17 0xc002eb3e18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3e90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002eb3eb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.382: INFO: Pod "nginx-deployment-7b8c6f4498-vx78v" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-vx78v,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-vx78v,UID:4b1e0cfb-fc12-47b1-9f89-373a3c2885ba,ResourceVersion:11396501,Generation:0,CreationTimestamp:2020-05-17 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc002eb3f77 0xc002eb3f78}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002eb3ff0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000964010}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:18 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:18 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.187,StartTime:2020-05-17 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 13:15:17 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://f109c1554338edf625aa03008e0dac7c4bbf6c6446e2ffc0980fccbe4092f304}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.382: INFO: Pod "nginx-deployment-7b8c6f4498-x7d6l" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-x7d6l,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-x7d6l,UID:05380748-97aa-4ba3-b024-2b2fff8407d0,ResourceVersion:11396677,Generation:0,CreationTimestamp:2020-05-17 13:15:25 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc0009640e7 0xc0009640e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000964160} {node.kubernetes.io/unreachable Exists NoExecute 0xc000964180}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:25 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.383: INFO: Pod "nginx-deployment-7b8c6f4498-zlvxq" is available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zlvxq,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-zlvxq,UID:14181f90-eba5-4249-8082-f2c30e787da0,ResourceVersion:11396492,Generation:0,CreationTimestamp:2020-05-17 13:15:09 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc000964277 0xc000964278}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0009642f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000964310}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:10 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:16 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:16 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:09 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.186,StartTime:2020-05-17 13:15:10 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 13:15:16 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://77e9611cc50b2a06840f0c7d74b4017e0f9aad4934701453c7bdd3ba0d0db202}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 13:15:29.383: INFO: Pod "nginx-deployment-7b8c6f4498-zptl9" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-zptl9,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-9033,SelfLink:/api/v1/namespaces/deployment-9033/pods/nginx-deployment-7b8c6f4498-zptl9,UID:f6fd19b6-1985-4979-99a7-4f40363cedd1,ResourceVersion:11396719,Generation:0,CreationTimestamp:2020-05-17 13:15:26 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 2e1677fd-b8ee-4196-81bd-101d8bfcc2c2 0xc0009643e7 0xc0009643e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-vnxp8 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-vnxp8,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-vnxp8 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000964460} {node.kubernetes.io/unreachable Exists NoExecute 0xc000964480}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:15:26 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.5,PodIP:,StartTime:2020-05-17 13:15:26 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:15:29.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9033" for this suite. 
May 17 13:15:52.293: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:15:52.369: INFO: namespace deployment-9033 deletion completed in 22.83801017s • [SLOW TEST:42.622 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:15:52.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test env composition May 17 13:15:52.491: INFO: Waiting up to 5m0s for pod "var-expansion-5f91ba94-2a36-4dc2-acd9-bf637c291950" in namespace "var-expansion-4404" to be "success or failure" May 17 13:15:52.493: INFO: Pod "var-expansion-5f91ba94-2a36-4dc2-acd9-bf637c291950": Phase="Pending", Reason="", readiness=false. Elapsed: 2.55262ms May 17 13:15:54.497: INFO: Pod "var-expansion-5f91ba94-2a36-4dc2-acd9-bf637c291950": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.006367767s May 17 13:15:56.500: INFO: Pod "var-expansion-5f91ba94-2a36-4dc2-acd9-bf637c291950": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009610723s STEP: Saw pod success May 17 13:15:56.500: INFO: Pod "var-expansion-5f91ba94-2a36-4dc2-acd9-bf637c291950" satisfied condition "success or failure" May 17 13:15:56.502: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-5f91ba94-2a36-4dc2-acd9-bf637c291950 container dapi-container: STEP: delete the pod May 17 13:15:56.522: INFO: Waiting for pod var-expansion-5f91ba94-2a36-4dc2-acd9-bf637c291950 to disappear May 17 13:15:56.526: INFO: Pod var-expansion-5f91ba94-2a36-4dc2-acd9-bf637c291950 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:15:56.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4404" for this suite. May 17 13:16:02.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:16:02.608: INFO: namespace var-expansion-4404 deletion completed in 6.079690488s • [SLOW TEST:10.239 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:16:02.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 17 13:16:02.689: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:02.732: INFO: Number of nodes with available pods: 0 May 17 13:16:02.732: INFO: Node iruya-worker is running more than one daemon pod May 17 13:16:03.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:03.742: INFO: Number of nodes with available pods: 0 May 17 13:16:03.742: INFO: Node iruya-worker is running more than one daemon pod May 17 13:16:04.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:04.742: INFO: Number of nodes with available pods: 0 May 17 13:16:04.742: INFO: Node iruya-worker is running more than one daemon pod May 17 13:16:05.738: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:05.741: INFO: Number of nodes with available pods: 0 May 17 13:16:05.741: INFO: Node iruya-worker is running more than one daemon pod May 17 13:16:06.739: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints 
[{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:06.742: INFO: Number of nodes with available pods: 1 May 17 13:16:06.742: INFO: Node iruya-worker is running more than one daemon pod May 17 13:16:07.737: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:07.739: INFO: Number of nodes with available pods: 2 May 17 13:16:07.739: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. May 17 13:16:07.754: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:07.756: INFO: Number of nodes with available pods: 1 May 17 13:16:07.756: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:08.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:08.765: INFO: Number of nodes with available pods: 1 May 17 13:16:08.765: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:09.760: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:09.764: INFO: Number of nodes with available pods: 1 May 17 13:16:09.764: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:10.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:10.764: INFO: Number of nodes with available pods: 1 May 17 13:16:10.765: INFO: Node iruya-worker2 is running 
more than one daemon pod May 17 13:16:11.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:11.765: INFO: Number of nodes with available pods: 1 May 17 13:16:11.765: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:12.765: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:12.768: INFO: Number of nodes with available pods: 1 May 17 13:16:12.768: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:13.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:13.766: INFO: Number of nodes with available pods: 1 May 17 13:16:13.766: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:14.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:14.765: INFO: Number of nodes with available pods: 1 May 17 13:16:14.765: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:15.761: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:15.764: INFO: Number of nodes with available pods: 1 May 17 13:16:15.764: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:16.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:16.764: INFO: Number of nodes with available pods: 1 May 
17 13:16:16.764: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:17.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:17.765: INFO: Number of nodes with available pods: 1 May 17 13:16:17.765: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:18.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:18.769: INFO: Number of nodes with available pods: 1 May 17 13:16:18.769: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:19.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:19.764: INFO: Number of nodes with available pods: 1 May 17 13:16:19.765: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:20.761: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:20.765: INFO: Number of nodes with available pods: 1 May 17 13:16:20.765: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:21.761: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 13:16:21.765: INFO: Number of nodes with available pods: 1 May 17 13:16:21.766: INFO: Node iruya-worker2 is running more than one daemon pod May 17 13:16:22.765: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 17 
13:16:22.769: INFO: Number of nodes with available pods: 1
May 17 13:16:22.769: INFO: Node iruya-worker2 is running more than one daemon pod
May 17 13:16:23.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:16:23.765: INFO: Number of nodes with available pods: 1
May 17 13:16:23.765: INFO: Node iruya-worker2 is running more than one daemon pod
May 17 13:16:24.762: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:16:24.765: INFO: Number of nodes with available pods: 2
May 17 13:16:24.765: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5140, will wait for the garbage collector to delete the pods
May 17 13:16:24.835: INFO: Deleting DaemonSet.extensions daemon-set took: 12.659493ms
May 17 13:16:25.135: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.268051ms
May 17 13:16:32.238: INFO: Number of nodes with available pods: 0
May 17 13:16:32.238: INFO: Number of running nodes: 0, number of available pods: 0
May 17 13:16:32.242: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5140/daemonsets","resourceVersion":"11397165"},"items":null}
May 17 13:16:32.244: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5140/pods","resourceVersion":"11397165"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:16:32.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5140" for this suite.
May 17 13:16:38.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:16:38.348: INFO: namespace daemonsets-5140 deletion completed in 6.093325936s
• [SLOW TEST:35.740 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:16:38.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-024de3ed-839f-4e9e-95a4-98e163936ddd
STEP: Creating secret with name s-test-opt-upd-34b18397-2c94-466d-ab39-4971b7f152cc
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-024de3ed-839f-4e9e-95a4-98e163936ddd
STEP: Updating secret s-test-opt-upd-34b18397-2c94-466d-ab39-4971b7f152cc
STEP: Creating secret with name s-test-opt-create-c07a3bee-f98a-4b72-bd21-d0d3733d3780
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:16:48.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6739" for this suite.
May 17 13:17:10.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:17:10.645: INFO: namespace projected-6739 deletion completed in 22.103701396s
• [SLOW TEST:32.295 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:17:10.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:17:10.731: INFO: (0) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 25.10859ms)
May 17 13:17:10.734: INFO: (1) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.019564ms)
May 17 13:17:10.736: INFO: (2) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.842104ms)
May 17 13:17:10.740: INFO: (3) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.229226ms)
May 17 13:17:10.743: INFO: (4) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.953616ms)
May 17 13:17:10.745: INFO: (5) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.651102ms)
May 17 13:17:10.748: INFO: (6) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.259898ms)
May 17 13:17:10.750: INFO: (7) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.610326ms)
May 17 13:17:10.754: INFO: (8) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.258169ms)
May 17 13:17:10.756: INFO: (9) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.569408ms)
May 17 13:17:10.760: INFO: (10) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.302369ms)
May 17 13:17:10.763: INFO: (11) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.98679ms)
May 17 13:17:10.766: INFO: (12) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.943166ms)
May 17 13:17:10.768: INFO: (13) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.498305ms)
May 17 13:17:10.771: INFO: (14) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.160125ms)
May 17 13:17:10.774: INFO: (15) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.986166ms)
May 17 13:17:10.777: INFO: (16) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.063834ms)
May 17 13:17:10.780: INFO: (17) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 2.706315ms)
May 17 13:17:10.784: INFO: (18) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.400984ms)
May 17 13:17:10.787: INFO: (19) /api/v1/nodes/iruya-worker/proxy/logs/:
containers/
pods/
(200; 3.19626ms)
[AfterEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:17:10.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5067" for this suite.
May 17 13:17:16.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:17:16.887: INFO: namespace proxy-5067 deletion completed in 6.096971037s
• [SLOW TEST:6.242 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
should proxy logs on node using proxy subresource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:17:16.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:17:16.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
May 17 13:17:17.122: INFO: stderr: ""
May 17 13:17:17.123: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.11\", GitCommit:\"d94a81c724ea8e1ccc9002d89b7fe81d58f89ede\", GitTreeState:\"clean\", BuildDate:\"2020-05-02T15:37:43Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2020-01-14T00:28:37Z\", GoVersion:\"go1.12.12\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:17:17.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7712" for this suite.
May 17 13:17:23.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:17:23.264: INFO: namespace kubectl-7712 deletion completed in 6.136000158s
• [SLOW TEST:6.376 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl version
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check is all data is printed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:17:23.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 17 13:17:23.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7343'
May 17 13:17:23.422: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 17 13:17:23.422: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
May 17 13:17:25.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7343'
May 17 13:17:26.176: INFO: stderr: ""
May 17 13:17:26.176: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:17:26.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7343" for this suite.
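The `Kubectl version` test above records the client and server `version.Info` structs verbatim in its stdout. When post-processing a run like this one, the Git versions can be pulled out mechanically; a minimal regex-based sketch (the parsing approach is illustrative, not part of the e2e framework), using the values recorded in this log:

```python
import re

# stdout captured by the "Kubectl version" test above (struct fields abridged).
stdout = (
    'Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.11"}\n'
    'Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7"}\n'
)

# One GitVersion field per struct: client first, then server.
client_version, server_version = re.findall(r'GitVersion:"([^"]+)"', stdout)
```

A check like this is handy for confirming the version skew visible in the log (v1.15.11 client against a v1.15.7 apiserver).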
May 17 13:17:32.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:17:32.559: INFO: namespace kubectl-7343 deletion completed in 6.096849701s
• [SLOW TEST:9.295 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:17:32.560: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:17:32.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7144" for this suite.
May 17 13:17:54.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:17:54.777: INFO: namespace pods-7144 deletion completed in 22.103254632s
• [SLOW TEST:22.217 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
[k8s.io] Pods Set QOS Class
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be submitted and removed [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:17:54.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 17 13:17:58.933: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:17:59.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7602" for this suite.
May 17 13:18:05.068: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:18:05.147: INFO: namespace container-runtime-7602 deletion completed in 6.091414472s
• [SLOW TEST:10.369 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:18:05.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:18:11.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2172" for this suite.
May 17 13:18:53.312: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:18:53.420: INFO: namespace kubelet-test-2172 deletion completed in 42.123411358s
• [SLOW TEST:48.273 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when scheduling a busybox Pod with hostAliases
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:18:53.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
May 17 13:18:53.481: INFO: Waiting up to 5m0s for pod "var-expansion-4b3ce410-3647-4e64-950a-517f720a7d1e" in namespace "var-expansion-3320" to be "success or failure"
May 17 13:18:53.485: INFO: Pod "var-expansion-4b3ce410-3647-4e64-950a-517f720a7d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 3.64232ms
May 17 13:18:55.489: INFO: Pod "var-expansion-4b3ce410-3647-4e64-950a-517f720a7d1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007426527s
May 17 13:18:57.492: INFO: Pod "var-expansion-4b3ce410-3647-4e64-950a-517f720a7d1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011256214s
STEP: Saw pod success
May 17 13:18:57.492: INFO: Pod "var-expansion-4b3ce410-3647-4e64-950a-517f720a7d1e" satisfied condition "success or failure"
May 17 13:18:57.495: INFO: Trying to get logs from node iruya-worker2 pod var-expansion-4b3ce410-3647-4e64-950a-517f720a7d1e container dapi-container:
STEP: delete the pod
May 17 13:18:57.531: INFO: Waiting for pod var-expansion-4b3ce410-3647-4e64-950a-517f720a7d1e to disappear
May 17 13:18:57.555: INFO: Pod var-expansion-4b3ce410-3647-4e64-950a-517f720a7d1e no longer exists
[AfterEach] [k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:18:57.555: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3320" for this suite.
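The `Waiting up to 5m0s for pod ... to be "success or failure"` lines throughout this log come from a poll loop: the framework re-reads the pod's status until its phase is terminal or the timeout lapses, logging the elapsed time on each attempt. A self-contained, hedged sketch of that pattern (function and parameter names are illustrative, not the framework's actual code; the phase source is stubbed):

```python
import time

def wait_for_terminal_phase(get_phase, timeout=300.0, interval=0.0):
    """Poll get_phase() until it returns a terminal pod phase or the timeout lapses."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)  # the real framework polls roughly every 2 seconds
    raise TimeoutError("pod never reached a terminal phase")

# Simulated phase sequence matching the log: Pending, Pending, then Succeeded.
phases = iter(["Pending", "Pending", "Succeeded"])
result = wait_for_terminal_phase(lambda: next(phases))
```

The test then treats `Succeeded` as "success" and would fetch container logs on `Failed`, mirroring the "Saw pod success" / "satisfied condition" lines above.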
May 17 13:19:03.577: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:19:03.647: INFO: namespace var-expansion-3320 deletion completed in 6.08774274s
• [SLOW TEST:10.226 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow substituting values in a container's args [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:19:03.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 17 13:19:03.712: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e448623d-a43e-4d57-99c4-8975a3e58657" in namespace "downward-api-8864" to be "success or failure"
May 17 13:19:03.726: INFO: Pod "downwardapi-volume-e448623d-a43e-4d57-99c4-8975a3e58657": Phase="Pending", Reason="", readiness=false. Elapsed: 14.355236ms
May 17 13:19:05.730: INFO: Pod "downwardapi-volume-e448623d-a43e-4d57-99c4-8975a3e58657": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018051747s
May 17 13:19:07.734: INFO: Pod "downwardapi-volume-e448623d-a43e-4d57-99c4-8975a3e58657": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022055209s
STEP: Saw pod success
May 17 13:19:07.734: INFO: Pod "downwardapi-volume-e448623d-a43e-4d57-99c4-8975a3e58657" satisfied condition "success or failure"
May 17 13:19:07.737: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-e448623d-a43e-4d57-99c4-8975a3e58657 container client-container:
STEP: delete the pod
May 17 13:19:07.933: INFO: Waiting for pod downwardapi-volume-e448623d-a43e-4d57-99c4-8975a3e58657 to disappear
May 17 13:19:07.956: INFO: Pod downwardapi-volume-e448623d-a43e-4d57-99c4-8975a3e58657 no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:19:07.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8864" for this suite.
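Every elapsed time in these logs ("14.355236ms", "4.022055209s", "3m0s") uses Go's `time.Duration` string formatting. When aggregating timings from a run like this, a small converter to seconds is useful; a hedged sketch (the regex-based approach is my own, covering only the units this log actually uses):

```python
import re

# Seconds per Go duration unit, for the units that appear in this log.
_UNITS = {"h": 3600.0, "m": 60.0, "s": 1.0, "ms": 1e-3, "us": 1e-6, "ns": 1e-9}

def go_duration_to_seconds(text):
    """Convert a Go-style duration string ("4.022055209s", "3m0s", "300.268051ms") to seconds."""
    total = 0.0
    # "ms" must be tried before "m" and "s" so "300.268051ms" is not read as minutes.
    for value, unit in re.findall(r"(\d+(?:\.\d+)?)(h|ms|us|ns|m|s)", text):
        total += float(value) * _UNITS[unit]
    return total
```

With this, the per-poll `Elapsed:` values and the namespace-deletion durations become directly comparable numbers.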
May 17 13:19:13.989: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:19:14.064: INFO: namespace downward-api-8864 deletion completed in 6.104301531s
• [SLOW TEST:10.418 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should provide container's memory limit [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:19:14.065: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-9a7bdf3f-67c9-4f1f-9512-806f054c9760
STEP: Creating a pod to test consume secrets
May 17 13:19:14.161: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-780a0563-3979-43ef-ba82-1f9f502bcaad" in namespace "projected-6657" to be "success or failure"
May 17 13:19:14.165: INFO: Pod "pod-projected-secrets-780a0563-3979-43ef-ba82-1f9f502bcaad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365466ms
May 17 13:19:16.170: INFO: Pod "pod-projected-secrets-780a0563-3979-43ef-ba82-1f9f502bcaad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008584347s
May 17 13:19:18.173: INFO: Pod "pod-projected-secrets-780a0563-3979-43ef-ba82-1f9f502bcaad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011591197s
STEP: Saw pod success
May 17 13:19:18.173: INFO: Pod "pod-projected-secrets-780a0563-3979-43ef-ba82-1f9f502bcaad" satisfied condition "success or failure"
May 17 13:19:18.175: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-780a0563-3979-43ef-ba82-1f9f502bcaad container projected-secret-volume-test:
STEP: delete the pod
May 17 13:19:18.227: INFO: Waiting for pod pod-projected-secrets-780a0563-3979-43ef-ba82-1f9f502bcaad to disappear
May 17 13:19:18.231: INFO: Pod pod-projected-secrets-780a0563-3979-43ef-ba82-1f9f502bcaad no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:19:18.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6657" for this suite.
May 17 13:19:24.279: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:19:24.359: INFO: namespace projected-6657 deletion completed in 6.12505816s
• [SLOW TEST:10.294 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:19:24.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
May 17 13:19:24.507: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4808,SelfLink:/api/v1/namespaces/watch-4808/configmaps/e2e-watch-test-resource-version,UID:51c9bda4-91d2-4f89-824d-f794b2d0b44a,ResourceVersion:11397756,Generation:0,CreationTimestamp:2020-05-17 13:19:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 17 13:19:24.507: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-4808,SelfLink:/api/v1/namespaces/watch-4808/configmaps/e2e-watch-test-resource-version,UID:51c9bda4-91d2-4f89-824d-f794b2d0b44a,ResourceVersion:11397757,Generation:0,CreationTimestamp:2020-05-17 13:19:24 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:19:24.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4808" for this suite.
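The watch test above opens its watch at the resourceVersion returned by the first update, so only the later MODIFIED (rv 11397756) and DELETED (rv 11397757) events are delivered, not the earlier creation and first modification. A sketch of that filtering over plain data (Kubernetes resourceVersions are opaque strings that clients must not compare numerically; treating them as ordered integers here is an illustrative assumption, and the two earlier rv values are hypothetical):

```python
def events_after(events, start_rv):
    """Return events whose resourceVersion is strictly newer than start_rv (illustrative ordering)."""
    return [e for e in events if int(e["resourceVersion"]) > int(start_rv)]

# The four changes the test makes; the watch starts at the rv of the first update.
events = [
    {"type": "ADDED",    "resourceVersion": "11397754"},  # creation (rv assumed)
    {"type": "MODIFIED", "resourceVersion": "11397755"},  # first update -> watch start (rv assumed)
    {"type": "MODIFIED", "resourceVersion": "11397756"},  # observed, per the log
    {"type": "DELETED",  "resourceVersion": "11397757"},  # observed, per the log
]
observed = [e["type"] for e in events_after(events, "11397755")]
```

The test's two `Got :` lines correspond exactly to the filtered MODIFIED and DELETED events.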
May 17 13:19:30.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:19:30.620: INFO: namespace watch-4808 deletion completed in 6.092134309s
• [SLOW TEST:6.260 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should be able to start watching from a specific resource version [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:19:30.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 17 13:19:30.690: INFO: Waiting up to 5m0s for pod "downward-api-2ceb6875-f29a-4020-af3d-ea81bd474840" in namespace "downward-api-1028" to be "success or failure"
May 17 13:19:30.695: INFO: Pod "downward-api-2ceb6875-f29a-4020-af3d-ea81bd474840": Phase="Pending", Reason="", readiness=false. Elapsed: 4.365683ms
May 17 13:19:32.699: INFO: Pod "downward-api-2ceb6875-f29a-4020-af3d-ea81bd474840": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008564111s
May 17 13:19:34.703: INFO: Pod "downward-api-2ceb6875-f29a-4020-af3d-ea81bd474840": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012863354s
STEP: Saw pod success
May 17 13:19:34.703: INFO: Pod "downward-api-2ceb6875-f29a-4020-af3d-ea81bd474840" satisfied condition "success or failure"
May 17 13:19:34.707: INFO: Trying to get logs from node iruya-worker pod downward-api-2ceb6875-f29a-4020-af3d-ea81bd474840 container dapi-container:
STEP: delete the pod
May 17 13:19:34.815: INFO: Waiting for pod downward-api-2ceb6875-f29a-4020-af3d-ea81bd474840 to disappear
May 17 13:19:34.837: INFO: Pod downward-api-2ceb6875-f29a-4020-af3d-ea81bd474840 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:19:34.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1028" for this suite.
May 17 13:19:40.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:19:40.990: INFO: namespace downward-api-1028 deletion completed in 6.15040375s
• [SLOW TEST:10.370 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run job should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:19:40.991: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 17 13:19:41.066: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8832'
May 17 13:19:41.182: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 17 13:19:41.182: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
May 17 13:19:41.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8832'
May 17 13:19:41.302: INFO: stderr: ""
May 17 13:19:41.302: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:19:41.302: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8832" for this suite.
May 17 13:20:03.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:20:03.385: INFO: namespace kubectl-8832 deletion completed in 22.079534619s
• [SLOW TEST:22.395 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:20:03.386: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
May 17 13:20:03.474: INFO: Waiting up to 5m0s for pod "pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd" in namespace "emptydir-4308" to be "success or failure"
May 17 13:20:03.491: INFO: Pod "pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.872477ms
May 17 13:20:05.495: INFO: Pod "pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021824955s
May 17 13:20:07.499: INFO: Pod "pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd": Phase="Running", Reason="", readiness=true. Elapsed: 4.024986634s
May 17 13:20:09.503: INFO: Pod "pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.029121945s
STEP: Saw pod success
May 17 13:20:09.503: INFO: Pod "pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd" satisfied condition "success or failure"
May 17 13:20:09.506: INFO: Trying to get logs from node iruya-worker pod pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd container test-container:
STEP: delete the pod
May 17 13:20:09.523: INFO: Waiting for pod pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd to disappear
May 17 13:20:09.527: INFO: Pod pod-25375dc1-e52b-4d9f-b2c8-9e8180e0e9bd no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:20:09.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4308" for this suite.
May 17 13:20:15.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:20:15.638: INFO: namespace emptydir-4308 deletion completed in 6.108430154s
• [SLOW TEST:12.253 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:20:15.639: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-6516/configmap-test-d696b0d9-acf3-4a1a-98a9-32678c823fbd
STEP: Creating a pod to test consume configMaps
May 17 13:20:15.745: INFO: Waiting up to 5m0s for pod "pod-configmaps-165b645a-efc2-49d1-9433-362fb864af4c" in namespace "configmap-6516" to be "success or failure"
May 17 13:20:15.749: INFO: Pod "pod-configmaps-165b645a-efc2-49d1-9433-362fb864af4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.201503ms
May 17 13:20:17.754: INFO: Pod "pod-configmaps-165b645a-efc2-49d1-9433-362fb864af4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008696021s
May 17 13:20:19.759: INFO: Pod "pod-configmaps-165b645a-efc2-49d1-9433-362fb864af4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013555471s
STEP: Saw pod success
May 17 13:20:19.759: INFO: Pod "pod-configmaps-165b645a-efc2-49d1-9433-362fb864af4c" satisfied condition "success or failure"
May 17 13:20:19.762: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-165b645a-efc2-49d1-9433-362fb864af4c container env-test:
STEP: delete the pod
May 17 13:20:19.805: INFO: Waiting for pod pod-configmaps-165b645a-efc2-49d1-9433-362fb864af4c to disappear
May 17 13:20:19.822: INFO: Pod pod-configmaps-165b645a-efc2-49d1-9433-362fb864af4c no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:20:19.822: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6516" for this suite.
May 17 13:20:25.853: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:20:25.919: INFO: namespace configmap-6516 deletion completed in 6.095126056s
• [SLOW TEST:10.280 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:20:25.920: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:20:30.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2034" for this suite.
May 17 13:21:16.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:21:16.122: INFO: namespace kubelet-test-2034 deletion completed in 46.085863896s
• [SLOW TEST:50.202 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:21:16.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-d270af10-3c58-40ac-ac13-d3664c44dae9
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:21:16.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8839" for this suite.
May 17 13:21:22.211: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:21:22.291: INFO: namespace configmap-8839 deletion completed in 6.091551768s
• [SLOW TEST:6.169 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:21:22.292: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-2b510891-459b-46bb-a9b4-d78190537a30
STEP: Creating configMap with name cm-test-opt-upd-f5d32d21-e24e-4b92-a532-0d5b350233c6
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-2b510891-459b-46bb-a9b4-d78190537a30
STEP: Updating configmap cm-test-opt-upd-f5d32d21-e24e-4b92-a532-0d5b350233c6
STEP: Creating configMap with name cm-test-opt-create-58f59871-8d71-4a22-b1db-26c2d152b14e
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:21:32.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8522" for this suite.
May 17 13:21:54.538: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:21:54.616: INFO: namespace configmap-8522 deletion completed in 22.131226546s
• [SLOW TEST:32.324 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:21:54.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update annotations on modification [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
May 17 13:22:01.261: INFO: Successfully updated pod "annotationupdate08d7b6eb-7b07-421e-9a05-38c72c306d95"
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:22:03.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6297" for this suite.
May 17 13:22:25.311: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:22:25.395: INFO: namespace projected-6297 deletion completed in 22.105255722s
• [SLOW TEST:30.779 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:22:25.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
May 17 13:22:25.484: INFO: Waiting up to 5m0s for pod "client-containers-c5ca91a2-9fc9-44b7-ad2e-165b675192a9" in namespace "containers-8545" to be "success or failure"
May 17 13:22:25.487: INFO: Pod "client-containers-c5ca91a2-9fc9-44b7-ad2e-165b675192a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.724057ms
May 17 13:22:27.490: INFO: Pod "client-containers-c5ca91a2-9fc9-44b7-ad2e-165b675192a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00657457s
May 17 13:22:29.495: INFO: Pod "client-containers-c5ca91a2-9fc9-44b7-ad2e-165b675192a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010801545s
STEP: Saw pod success
May 17 13:22:29.495: INFO: Pod "client-containers-c5ca91a2-9fc9-44b7-ad2e-165b675192a9" satisfied condition "success or failure"
May 17 13:22:29.498: INFO: Trying to get logs from node iruya-worker2 pod client-containers-c5ca91a2-9fc9-44b7-ad2e-165b675192a9 container test-container:
STEP: delete the pod
May 17 13:22:29.635: INFO: Waiting for pod client-containers-c5ca91a2-9fc9-44b7-ad2e-165b675192a9 to disappear
May 17 13:22:29.642: INFO: Pod client-containers-c5ca91a2-9fc9-44b7-ad2e-165b675192a9 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:22:29.642: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8545" for this suite.
May 17 13:22:35.658: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:22:35.763: INFO: namespace containers-8545 deletion completed in 6.117589341s
• [SLOW TEST:10.368 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:22:35.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
May 17 13:22:35.832: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7110,SelfLink:/api/v1/namespaces/watch-7110/configmaps/e2e-watch-test-label-changed,UID:8fc7be6e-a3b8-4db4-b0c9-3707ed732863,ResourceVersion:11398362,Generation:0,CreationTimestamp:2020-05-17 13:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
May 17 13:22:35.832: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7110,SelfLink:/api/v1/namespaces/watch-7110/configmaps/e2e-watch-test-label-changed,UID:8fc7be6e-a3b8-4db4-b0c9-3707ed732863,ResourceVersion:11398363,Generation:0,CreationTimestamp:2020-05-17 13:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
May 17 13:22:35.832: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7110,SelfLink:/api/v1/namespaces/watch-7110/configmaps/e2e-watch-test-label-changed,UID:8fc7be6e-a3b8-4db4-b0c9-3707ed732863,ResourceVersion:11398364,Generation:0,CreationTimestamp:2020-05-17 13:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
May 17 13:22:45.872: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7110,SelfLink:/api/v1/namespaces/watch-7110/configmaps/e2e-watch-test-label-changed,UID:8fc7be6e-a3b8-4db4-b0c9-3707ed732863,ResourceVersion:11398386,Generation:0,CreationTimestamp:2020-05-17 13:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
May 17 13:22:45.872: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7110,SelfLink:/api/v1/namespaces/watch-7110/configmaps/e2e-watch-test-label-changed,UID:8fc7be6e-a3b8-4db4-b0c9-3707ed732863,ResourceVersion:11398387,Generation:0,CreationTimestamp:2020-05-17 13:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
May 17 13:22:45.872: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-7110,SelfLink:/api/v1/namespaces/watch-7110/configmaps/e2e-watch-test-label-changed,UID:8fc7be6e-a3b8-4db4-b0c9-3707ed732863,ResourceVersion:11398388,Generation:0,CreationTimestamp:2020-05-17 13:22:35 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:22:45.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7110" for this suite.
May 17 13:22:51.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:22:51.970: INFO: namespace watch-7110 deletion completed in 6.09276848s
• [SLOW TEST:16.206 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:22:51.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-1d9becd6-30ed-4776-9f29-161534d599c2
STEP: Creating a pod to test consume configMaps
May 17 13:22:52.071: INFO: Waiting up to 5m0s for pod "pod-configmaps-06c1239f-b036-4efb-8f82-22987a19f944" in namespace "configmap-4215" to be "success or failure"
May 17 13:22:52.089: INFO: Pod "pod-configmaps-06c1239f-b036-4efb-8f82-22987a19f944": Phase="Pending", Reason="", readiness=false. Elapsed: 17.817335ms
May 17 13:22:54.110: INFO: Pod "pod-configmaps-06c1239f-b036-4efb-8f82-22987a19f944": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039037702s
May 17 13:22:56.115: INFO: Pod "pod-configmaps-06c1239f-b036-4efb-8f82-22987a19f944": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.043715072s
STEP: Saw pod success
May 17 13:22:56.115: INFO: Pod "pod-configmaps-06c1239f-b036-4efb-8f82-22987a19f944" satisfied condition "success or failure"
May 17 13:22:56.118: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-06c1239f-b036-4efb-8f82-22987a19f944 container configmap-volume-test:
STEP: delete the pod
May 17 13:22:56.247: INFO: Waiting for pod pod-configmaps-06c1239f-b036-4efb-8f82-22987a19f944 to disappear
May 17 13:22:56.303: INFO: Pod pod-configmaps-06c1239f-b036-4efb-8f82-22987a19f944 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:22:56.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4215" for this suite.
May 17 13:23:02.360: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:23:02.480: INFO: namespace configmap-4215 deletion completed in 6.138332292s
• [SLOW TEST:10.510 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:23:02.481: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-5f15374a-1c6b-43d4-8856-14bab51c6e7a
STEP: Creating a pod to test consume configMaps
May 17 13:23:02.569: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-990998b7-7aba-416b-aeea-7382e9ef5ea3" in namespace "projected-6858" to be "success or failure"
May 17 13:23:02.572: INFO: Pod "pod-projected-configmaps-990998b7-7aba-416b-aeea-7382e9ef5ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.358196ms
May 17 13:23:04.577: INFO: Pod "pod-projected-configmaps-990998b7-7aba-416b-aeea-7382e9ef5ea3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008073166s
May 17 13:23:06.656: INFO: Pod "pod-projected-configmaps-990998b7-7aba-416b-aeea-7382e9ef5ea3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086758945s
STEP: Saw pod success
May 17 13:23:06.656: INFO: Pod "pod-projected-configmaps-990998b7-7aba-416b-aeea-7382e9ef5ea3" satisfied condition "success or failure"
May 17 13:23:06.659: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-990998b7-7aba-416b-aeea-7382e9ef5ea3 container projected-configmap-volume-test:
STEP: delete the pod
May 17 13:23:06.711: INFO: Waiting for pod pod-projected-configmaps-990998b7-7aba-416b-aeea-7382e9ef5ea3 to disappear
May 17 13:23:06.734: INFO: Pod pod-projected-configmaps-990998b7-7aba-416b-aeea-7382e9ef5ea3 no longer exists
[AfterEach] [sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:23:06.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6858" for this suite.
May 17 13:23:12.756: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:23:12.824: INFO: namespace projected-6858 deletion completed in 6.086396269s
• [SLOW TEST:10.344 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:23:12.825: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
May 17 13:23:12.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-3954 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 17 13:23:19.085: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0517 13:23:18.990374 622 log.go:172] (0xc000140d10) (0xc000b34140) Create stream\nI0517 13:23:18.990404 622 log.go:172] (0xc000140d10) (0xc000b34140) Stream added, broadcasting: 1\nI0517 13:23:18.993925 622 log.go:172] (0xc000140d10) Reply frame received for 1\nI0517 13:23:18.993974 622 log.go:172] (0xc000140d10) (0xc0004fda40) Create stream\nI0517 13:23:18.993990 622 log.go:172] (0xc000140d10) (0xc0004fda40) Stream added, broadcasting: 3\nI0517 13:23:18.995005 622 log.go:172] (0xc000140d10) Reply frame received for 3\nI0517 13:23:18.995035 622 log.go:172] (0xc000140d10) (0xc00076a0a0) Create stream\nI0517 13:23:18.995043 622 log.go:172] (0xc000140d10) (0xc00076a0a0) Stream added, broadcasting: 5\nI0517 13:23:18.995936 622 log.go:172] (0xc000140d10) Reply frame received for 5\nI0517 13:23:18.995969 622 log.go:172] (0xc000140d10) (0xc0006ce1e0) Create stream\nI0517 13:23:18.995986 622 log.go:172] (0xc000140d10) (0xc0006ce1e0) Stream added, broadcasting: 7\nI0517 13:23:18.996937 622 log.go:172] (0xc000140d10) Reply frame received for 7\nI0517 13:23:18.997084 622 log.go:172] (0xc0004fda40) (3) Writing data frame\nI0517 13:23:18.997478 622 log.go:172] (0xc0004fda40) (3) Writing data frame\nI0517 13:23:18.998267 622 log.go:172] (0xc000140d10) Data frame received for 5\nI0517 13:23:18.998292 622 log.go:172] (0xc00076a0a0) (5) Data frame handling\nI0517 13:23:18.998312 622 log.go:172] (0xc00076a0a0) (5) Data frame sent\nI0517 13:23:18.998867 622 log.go:172] (0xc000140d10) Data frame received for 5\nI0517 13:23:18.998880 622 log.go:172] (0xc00076a0a0) (5) Data frame handling\nI0517 13:23:18.998889 622 log.go:172] (0xc00076a0a0) (5) Data frame sent\nI0517 13:23:19.043010 622 log.go:172] (0xc000140d10) Data frame received for 7\nI0517 13:23:19.043051 622 log.go:172] (0xc0006ce1e0) (7) Data frame handling\nI0517 13:23:19.043085 622 log.go:172] (0xc000140d10) Data frame received for 5\nI0517 13:23:19.043120 622 log.go:172] (0xc00076a0a0) (5) Data frame handling\nI0517 13:23:19.043232 622 log.go:172] (0xc000140d10) Data frame received for 1\nI0517 13:23:19.043261 622 log.go:172] (0xc000b34140) (1) Data frame handling\nI0517 13:23:19.043300 622 log.go:172] (0xc000b34140) (1) Data frame sent\nI0517 13:23:19.043467 622 log.go:172] (0xc000140d10) (0xc000b34140) Stream removed, broadcasting: 1\nI0517 13:23:19.043635 622 log.go:172] (0xc000140d10) (0xc0004fda40) Stream removed, broadcasting: 3\nI0517 13:23:19.043681 622 log.go:172] (0xc000140d10) Go away received\nI0517 13:23:19.043727 622 log.go:172] (0xc000140d10) (0xc000b34140) Stream removed, broadcasting: 1\nI0517 13:23:19.043760 622 log.go:172] (0xc000140d10) (0xc0004fda40) Stream removed, broadcasting: 3\nI0517 13:23:19.043780 622 log.go:172] (0xc000140d10) (0xc00076a0a0) Stream removed, broadcasting: 5\nI0517 13:23:19.043799 622 log.go:172] (0xc000140d10) (0xc0006ce1e0) Stream removed, broadcasting: 7\n"
May 17 13:23:19.085: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:23:21.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3954" for this suite. 
May 17 13:23:33.144: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:23:33.216: INFO: namespace kubectl-3954 deletion completed in 12.120002787s
• [SLOW TEST:20.391 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run --rm job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:23:33.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
May 17 13:23:33.279: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:23:39.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-8718" for this suite.
May 17 13:23:45.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:23:45.477: INFO: namespace init-container-8718 deletion completed in 6.09788108s
• [SLOW TEST:12.261 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:23:45.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 17 13:23:53.592: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 17 13:23:53.610: INFO: Pod pod-with-prestop-http-hook still exists
May 17 13:23:55.610: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 17 13:23:55.614: INFO: Pod pod-with-prestop-http-hook still exists
May 17 13:23:57.610: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 17 13:23:57.613: INFO: Pod pod-with-prestop-http-hook still exists
May 17 13:23:59.610: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 17 13:23:59.614: INFO: Pod pod-with-prestop-http-hook still exists
May 17 13:24:01.610: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 17 13:24:01.614: INFO: Pod pod-with-prestop-http-hook still exists
May 17 13:24:03.610: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
May 17 13:24:03.615: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:24:03.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3699" for this suite. 
May 17 13:24:27.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:24:27.716: INFO: namespace container-lifecycle-hook-3699 deletion completed in 24.089845321s
• [SLOW TEST:42.239 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute prestop http hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:24:27.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-d279459c-b9b5-4e91-8f17-7dc045e025ee
STEP: Creating a pod to test consume secrets
May 17 13:24:27.791: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f0bda6c7-63db-4ff2-a52b-2db7fd8b4f97" in namespace "projected-9819" to be "success or failure"
May 17 13:24:27.795: INFO: Pod "pod-projected-secrets-f0bda6c7-63db-4ff2-a52b-2db7fd8b4f97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.78861ms
May 17 13:24:29.799: INFO: Pod "pod-projected-secrets-f0bda6c7-63db-4ff2-a52b-2db7fd8b4f97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008136482s
May 17 13:24:31.842: INFO: Pod "pod-projected-secrets-f0bda6c7-63db-4ff2-a52b-2db7fd8b4f97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051349739s
STEP: Saw pod success
May 17 13:24:31.842: INFO: Pod "pod-projected-secrets-f0bda6c7-63db-4ff2-a52b-2db7fd8b4f97" satisfied condition "success or failure"
May 17 13:24:31.846: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-f0bda6c7-63db-4ff2-a52b-2db7fd8b4f97 container projected-secret-volume-test: 
STEP: delete the pod
May 17 13:24:31.898: INFO: Waiting for pod pod-projected-secrets-f0bda6c7-63db-4ff2-a52b-2db7fd8b4f97 to disappear
May 17 13:24:31.915: INFO: Pod pod-projected-secrets-f0bda6c7-63db-4ff2-a52b-2db7fd8b4f97 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:24:31.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9819" for this suite. 
May 17 13:24:37.930: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:24:38.008: INFO: namespace projected-9819 deletion completed in 6.08887811s
• [SLOW TEST:10.291 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:24:38.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-a2704447-1eec-4c56-b249-8e0b6a7bbe6e
STEP: Creating a pod to test consume secrets
May 17 13:24:38.117: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ba555671-9194-46f3-83a0-7d0088dd66fa" in namespace "projected-6775" to be "success or failure"
May 17 13:24:38.119: INFO: Pod "pod-projected-secrets-ba555671-9194-46f3-83a0-7d0088dd66fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.108559ms
May 17 13:24:40.124: INFO: Pod "pod-projected-secrets-ba555671-9194-46f3-83a0-7d0088dd66fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006786841s
May 17 13:24:42.129: INFO: Pod "pod-projected-secrets-ba555671-9194-46f3-83a0-7d0088dd66fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011499251s
STEP: Saw pod success
May 17 13:24:42.129: INFO: Pod "pod-projected-secrets-ba555671-9194-46f3-83a0-7d0088dd66fa" satisfied condition "success or failure"
May 17 13:24:42.132: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-ba555671-9194-46f3-83a0-7d0088dd66fa container projected-secret-volume-test: 
STEP: delete the pod
May 17 13:24:42.150: INFO: Waiting for pod pod-projected-secrets-ba555671-9194-46f3-83a0-7d0088dd66fa to disappear
May 17 13:24:42.154: INFO: Pod pod-projected-secrets-ba555671-9194-46f3-83a0-7d0088dd66fa no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:24:42.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6775" for this suite. 
May 17 13:24:48.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:24:48.251: INFO: namespace projected-6775 deletion completed in 6.093836003s
• [SLOW TEST:10.244 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:24:48.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
May 17 13:24:48.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2372'
May 17 13:24:48.628: INFO: stderr: ""
May 17 13:24:48.628: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start. 
May 17 13:24:49.634: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:24:49.634: INFO: Found 0 / 1
May 17 13:24:50.693: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:24:50.693: INFO: Found 0 / 1
May 17 13:24:51.638: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:24:51.638: INFO: Found 0 / 1
May 17 13:24:52.633: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:24:52.634: INFO: Found 1 / 1
May 17 13:24:52.634: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
May 17 13:24:52.637: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:24:52.637: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
STEP: checking for a matching strings
May 17 13:24:52.637: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7j2g2 redis-master --namespace=kubectl-2372'
May 17 13:24:52.752: INFO: stderr: ""
May 17 13:24:52.752: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 May 13:24:51.782 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 May 13:24:51.782 # Server started, Redis version 3.2.12\n1:M 17 May 13:24:51.782 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 May 13:24:51.782 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
May 17 13:24:52.753: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7j2g2 redis-master --namespace=kubectl-2372 --tail=1'
May 17 13:24:52.875: INFO: stderr: ""
May 17 13:24:52.875: INFO: stdout: "1:M 17 May 13:24:51.782 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
May 17 13:24:52.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7j2g2 redis-master --namespace=kubectl-2372 --limit-bytes=1'
May 17 13:24:52.982: INFO: stderr: ""
May 17 13:24:52.982: INFO: stdout: " "
STEP: exposing timestamps
May 17 13:24:52.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7j2g2 redis-master --namespace=kubectl-2372 --tail=1 --timestamps'
May 17 13:24:53.093: INFO: stderr: ""
May 17 13:24:53.093: INFO: stdout: "2020-05-17T13:24:51.78245571Z 1:M 17 May 13:24:51.782 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
May 17 13:24:55.593: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7j2g2 redis-master --namespace=kubectl-2372 --since=1s'
May 17 13:24:55.718: INFO: stderr: ""
May 17 13:24:55.719: INFO: stdout: ""
May 17 13:24:55.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-7j2g2 redis-master --namespace=kubectl-2372 --since=24h'
May 17 13:24:55.826: INFO: stderr: ""
May 17 13:24:55.826: INFO: stdout: " _._ \n _.-``__ ''-._ \n _.-`` `. `_. ''-._ Redis 3.2.12 (35a5711f/0) 64 bit\n .-`` .-```. ```\\/ _.,_ ''-._ \n ( ' , .-` | `, ) Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'| Port: 6379\n | `-._ `._ / _.-' | PID: 1\n `-._ `-._ `-./ _.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | http://redis.io \n `-._ `-._`-.__.-'_.-' _.-' \n |`-._`-._ `-.__.-' _.-'_.-'| \n | `-._`-._ _.-'_.-' | \n `-._ `-._`-.__.-'_.-' _.-' \n `-._ `-.__.-' _.-' \n `-._ _.-' \n `-.__.-' \n\n1:M 17 May 13:24:51.782 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 May 13:24:51.782 # Server started, Redis version 3.2.12\n1:M 17 May 13:24:51.782 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 May 13:24:51.782 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
May 17 13:24:55.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2372'
May 17 13:24:55.919: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 17 13:24:55.919: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
May 17 13:24:55.919: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-2372'
May 17 13:24:56.023: INFO: stderr: "No resources found.\n"
May 17 13:24:56.023: INFO: stdout: ""
May 17 13:24:56.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-2372 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 17 13:24:56.116: INFO: stderr: ""
May 17 13:24:56.116: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:24:56.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2372" for this suite. 
May 17 13:25:20.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:25:20.208: INFO: namespace kubectl-2372 deletion completed in 24.088509534s
• [SLOW TEST:31.956 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl logs
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to retrieve and filter logs [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:25:20.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
May 17 13:25:20.360: INFO: Waiting up to 5m0s for pod "pod-4076de45-64e2-4535-8011-c50fbe4e51db" in namespace "emptydir-4018" to be "success or failure"
May 17 13:25:20.393: INFO: Pod "pod-4076de45-64e2-4535-8011-c50fbe4e51db": Phase="Pending", Reason="", readiness=false. Elapsed: 32.714441ms
May 17 13:25:22.397: INFO: Pod "pod-4076de45-64e2-4535-8011-c50fbe4e51db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037640959s
May 17 13:25:24.402: INFO: Pod "pod-4076de45-64e2-4535-8011-c50fbe4e51db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041908465s
May 17 13:25:26.405: INFO: Pod "pod-4076de45-64e2-4535-8011-c50fbe4e51db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.045599087s
STEP: Saw pod success
May 17 13:25:26.405: INFO: Pod "pod-4076de45-64e2-4535-8011-c50fbe4e51db" satisfied condition "success or failure"
May 17 13:25:26.408: INFO: Trying to get logs from node iruya-worker2 pod pod-4076de45-64e2-4535-8011-c50fbe4e51db container test-container: 
STEP: delete the pod
May 17 13:25:26.450: INFO: Waiting for pod pod-4076de45-64e2-4535-8011-c50fbe4e51db to disappear
May 17 13:25:26.491: INFO: Pod pod-4076de45-64e2-4535-8011-c50fbe4e51db no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:25:26.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4018" for this suite. 
May 17 13:25:32.559: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:25:32.628: INFO: namespace emptydir-4018 deletion completed in 6.132538836s
• [SLOW TEST:12.420 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:25:32.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-hxlhr in namespace proxy-8123
I0517 13:25:32.821540 6 runners.go:180] Created replication controller with name: proxy-service-hxlhr, namespace: proxy-8123, replica count: 1
I0517 13:25:33.871999 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0517 13:25:34.872223 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0517 13:25:35.872391 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0517 13:25:36.872651 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0517 13:25:37.872856 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0517 13:25:38.873087 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:39.873508 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:40.873733 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:41.873963 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:42.874184 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:43.874399 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:44.874595 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:45.874815 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:46.874964 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady
I0517 13:25:47.875180 6 runners.go:180] proxy-service-hxlhr Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
May 17 13:25:47.878: INFO: setup took 15.117721064s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
May 17 13:25:47.884: INFO: (0) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.833309ms)
May 17 13:25:47.886: INFO: (0) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 7.545477ms)
May 17 13:25:47.886: INFO: (0) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 7.902643ms)
May 17 13:25:47.887: INFO: (0) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 8.463262ms)
May 17 13:25:47.887: INFO: (0) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 8.289993ms)
May 17 13:25:47.887: INFO: (0) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 8.808984ms)
May 17 13:25:47.887: INFO: (0) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 8.697698ms)
May 17 13:25:47.887: INFO: (0) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 8.8533ms)
May 17 13:25:47.887: INFO: (0) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... (200; 9.043758ms)
May 17 13:25:47.888: INFO: (0) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 9.21503ms)
May 17 13:25:47.888: INFO: (0) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 9.324862ms)
May 17 13:25:47.973: INFO: (0) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test (200; 4.12529ms)
May 17 13:25:47.979: INFO: (1) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.091449ms)
May 17 13:25:47.979: INFO: (1) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.159576ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 4.436238ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 4.964244ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.904346ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 4.944425ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.936532ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... (200; 5.058578ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 5.006797ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 5.106534ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 4.997303ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 5.142386ms)
May 17 13:25:47.980: INFO: (1) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test (200; 2.826369ms)
May 17 13:25:47.984: INFO: (2) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: ... (200; 4.411791ms)
May 17 13:25:47.985: INFO: (2) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 4.630161ms)
May 17 13:25:47.985: INFO: (2) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 4.658947ms)
May 17 13:25:47.985: INFO: (2) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 4.699271ms)
May 17 13:25:47.986: INFO: (2) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.904736ms)
May 17 13:25:47.986: INFO: (2) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 5.178874ms)
May 17 13:25:47.986: INFO: (2) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.983679ms)
May 17 13:25:47.986: INFO: (2) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 5.122078ms)
May 17 13:25:47.986: INFO: (2) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 5.391344ms)
May 17 13:25:47.986: INFO: (2) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 5.64082ms)
May 17 13:25:47.986: INFO: (2) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 5.716285ms)
May 17 13:25:47.989: INFO: (3) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test (200; 3.558247ms)
May 17 13:25:47.990: INFO: (3) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 3.587564ms)
May 17 13:25:47.990: INFO: (3) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 3.675025ms)
May 17 13:25:47.990: INFO: (3) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... (200; 3.726139ms)
May 17 13:25:47.990: INFO: (3) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<...
(200; 3.735922ms) May 17 13:25:47.990: INFO: (3) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 3.83904ms) May 17 13:25:47.991: INFO: (3) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.191966ms) May 17 13:25:47.991: INFO: (3) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.231048ms) May 17 13:25:47.991: INFO: (3) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.127484ms) May 17 13:25:47.991: INFO: (3) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 4.809978ms) May 17 13:25:47.991: INFO: (3) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 4.955805ms) May 17 13:25:47.991: INFO: (3) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 4.994903ms) May 17 13:25:47.992: INFO: (3) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 5.126716ms) May 17 13:25:47.992: INFO: (3) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 5.216912ms) May 17 13:25:47.994: INFO: (4) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 2.032289ms) May 17 13:25:47.996: INFO: (4) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.69847ms) May 17 13:25:47.997: INFO: (4) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... 
(200; 4.889124ms) May 17 13:25:47.997: INFO: (4) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.953725ms) May 17 13:25:47.997: INFO: (4) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.000789ms) May 17 13:25:47.997: INFO: (4) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 4.945455ms) May 17 13:25:47.997: INFO: (4) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 5.262797ms) May 17 13:25:47.997: INFO: (4) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 5.257783ms) May 17 13:25:47.997: INFO: (4) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test (200; 2.303338ms) May 17 13:25:48.001: INFO: (5) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 2.574355ms) May 17 13:25:48.002: INFO: (5) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 3.030198ms) May 17 13:25:48.002: INFO: (5) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... (200; 3.665799ms) May 17 13:25:48.003: INFO: (5) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 3.867663ms) May 17 13:25:48.003: INFO: (5) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.057253ms) May 17 13:25:48.003: INFO: (5) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 4.299164ms) May 17 13:25:48.003: INFO: (5) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 4.289882ms) May 17 13:25:48.003: INFO: (5) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: ... 
(200; 4.835631ms) May 17 13:25:48.010: INFO: (6) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.025531ms) May 17 13:25:48.010: INFO: (6) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 5.093596ms) May 17 13:25:48.010: INFO: (6) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 4.962418ms) May 17 13:25:48.010: INFO: (6) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 5.032453ms) May 17 13:25:48.010: INFO: (6) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test (200; 5.527236ms) May 17 13:25:48.013: INFO: (7) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 2.937157ms) May 17 13:25:48.013: INFO: (7) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... (200; 3.067217ms) May 17 13:25:48.014: INFO: (7) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test<... 
(200; 3.323105ms) May 17 13:25:48.015: INFO: (7) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 3.811042ms) May 17 13:25:48.015: INFO: (7) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 4.095512ms) May 17 13:25:48.015: INFO: (7) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.474702ms) May 17 13:25:48.015: INFO: (7) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 4.044731ms) May 17 13:25:48.015: INFO: (7) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 3.877972ms) May 17 13:25:48.015: INFO: (7) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 4.63639ms) May 17 13:25:48.015: INFO: (7) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.812174ms) May 17 13:25:48.016: INFO: (7) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 4.772065ms) May 17 13:25:48.016: INFO: (7) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 4.705702ms) May 17 13:25:48.016: INFO: (7) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 4.689672ms) May 17 13:25:48.016: INFO: (7) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 5.431122ms) May 17 13:25:48.016: INFO: (7) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 5.312617ms) May 17 13:25:48.021: INFO: (8) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 5.362449ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... 
(200; 5.792391ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.900276ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 5.767205ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 5.910025ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 5.843594ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 5.926859ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 5.806266ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test (200; 5.969484ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 5.975855ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 6.19459ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 6.437434ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 6.255208ms) May 17 13:25:48.022: INFO: (8) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 6.308116ms) May 17 13:25:48.024: INFO: (9) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 1.873951ms) May 17 13:25:48.027: INFO: (9) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 4.08246ms) May 17 13:25:48.027: INFO: (9) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo 
(200; 4.572629ms) May 17 13:25:48.027: INFO: (9) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.627002ms) May 17 13:25:48.028: INFO: (9) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... (200; 5.019819ms) May 17 13:25:48.028: INFO: (9) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 5.366932ms) May 17 13:25:48.028: INFO: (9) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.358111ms) May 17 13:25:48.028: INFO: (9) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 5.398003ms) May 17 13:25:48.028: INFO: (9) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 5.46613ms) May 17 13:25:48.028: INFO: (9) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 5.406709ms) May 17 13:25:48.028: INFO: (9) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 5.393119ms) May 17 13:25:48.028: INFO: (9) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: ... (200; 3.66776ms) May 17 13:25:48.033: INFO: (10) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 3.949581ms) May 17 13:25:48.033: INFO: (10) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 3.930969ms) May 17 13:25:48.033: INFO: (10) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 4.452732ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... 
(200; 4.627097ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.697663ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.763778ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 4.715967ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 4.747825ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 4.803099ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 4.740173ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 4.778225ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 4.892963ms) May 17 13:25:48.034: INFO: (10) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 4.88183ms) May 17 13:25:48.035: INFO: (10) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: ... (200; 2.884331ms) May 17 13:25:48.039: INFO: (11) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 4.284004ms) May 17 13:25:48.039: INFO: (11) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.516171ms) May 17 13:25:48.039: INFO: (11) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.493743ms) May 17 13:25:48.039: INFO: (11) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... 
(200; 4.196052ms) May 17 13:25:48.040: INFO: (11) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 4.640538ms) May 17 13:25:48.040: INFO: (11) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 5.007396ms) May 17 13:25:48.040: INFO: (11) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.775522ms) May 17 13:25:48.040: INFO: (11) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: ... (200; 3.40815ms) May 17 13:25:48.044: INFO: (12) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 3.32608ms) May 17 13:25:48.044: INFO: (12) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 3.349396ms) May 17 13:25:48.044: INFO: (12) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 3.309488ms) May 17 13:25:48.044: INFO: (12) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 3.631014ms) May 17 13:25:48.044: INFO: (12) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 3.901649ms) May 17 13:25:48.044: INFO: (12) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 3.980536ms) May 17 13:25:48.045: INFO: (12) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 4.164249ms) May 17 13:25:48.045: INFO: (12) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 4.490711ms) May 17 13:25:48.045: INFO: (12) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 4.506815ms) May 17 13:25:48.045: INFO: (12) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 4.588606ms) May 17 13:25:48.047: INFO: (13) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo 
(200; 2.13606ms) May 17 13:25:48.048: INFO: (13) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 2.752064ms) May 17 13:25:48.048: INFO: (13) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 2.696678ms) May 17 13:25:48.048: INFO: (13) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... (200; 2.773514ms) May 17 13:25:48.048: INFO: (13) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 2.79839ms) May 17 13:25:48.048: INFO: (13) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 3.029202ms) May 17 13:25:48.048: INFO: (13) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 3.033106ms) May 17 13:25:48.048: INFO: (13) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 3.361389ms) May 17 13:25:48.049: INFO: (13) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 3.884387ms) May 17 13:25:48.049: INFO: (13) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 3.918103ms) May 17 13:25:48.049: INFO: (13) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 3.890814ms) May 17 13:25:48.049: INFO: (13) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 3.891045ms) May 17 13:25:48.049: INFO: (13) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 3.986592ms) May 17 13:25:48.049: INFO: (13) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 3.942073ms) May 17 13:25:48.049: INFO: (13) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 4.018694ms) May 17 13:25:48.049: INFO: (13) 
/api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test (200; 3.447ms) May 17 13:25:48.053: INFO: (14) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 3.400804ms) May 17 13:25:48.053: INFO: (14) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: ... (200; 3.786717ms) May 17 13:25:48.053: INFO: (14) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 3.775606ms) May 17 13:25:48.053: INFO: (14) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 3.837154ms) May 17 13:25:48.054: INFO: (14) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 4.587834ms) May 17 13:25:48.054: INFO: (14) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 4.530131ms) May 17 13:25:48.054: INFO: (14) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 4.575543ms) May 17 13:25:48.054: INFO: (14) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 4.537768ms) May 17 13:25:48.054: INFO: (14) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 4.561325ms) May 17 13:25:48.054: INFO: (14) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 4.594738ms) May 17 13:25:48.064: INFO: (15) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 10.028664ms) May 17 13:25:48.065: INFO: (15) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 10.831934ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 12.073076ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 12.14559ms) May 17 13:25:48.066: 
INFO: (15) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 12.112811ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... (200; 12.168349ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 12.123641ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 12.169537ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 12.226649ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 12.201555ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 12.238612ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 12.203173ms) May 17 13:25:48.066: INFO: (15) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: ... (200; 4.67351ms) May 17 13:25:48.071: INFO: (16) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 4.78757ms) May 17 13:25:48.071: INFO: (16) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 4.794049ms) May 17 13:25:48.071: INFO: (16) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... 
(200; 4.776898ms) May 17 13:25:48.071: INFO: (16) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.794213ms) May 17 13:25:48.071: INFO: (16) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 4.946167ms) May 17 13:25:48.071: INFO: (16) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 4.921701ms) May 17 13:25:48.071: INFO: (16) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 4.981422ms) May 17 13:25:48.071: INFO: (16) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.088929ms) May 17 13:25:48.072: INFO: (16) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 5.274919ms) May 17 13:25:48.072: INFO: (16) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.272213ms) May 17 13:25:48.072: INFO: (16) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 5.227044ms) May 17 13:25:48.072: INFO: (16) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: ... (200; 3.634705ms) May 17 13:25:48.076: INFO: (17) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test<... 
(200; 5.32126ms) May 17 13:25:48.077: INFO: (17) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 5.298356ms) May 17 13:25:48.077: INFO: (17) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.410536ms) May 17 13:25:48.079: INFO: (17) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 7.152647ms) May 17 13:25:48.079: INFO: (17) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 7.189901ms) May 17 13:25:48.079: INFO: (17) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 7.259662ms) May 17 13:25:48.084: INFO: (18) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 5.39886ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 5.443245ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 5.436624ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 5.502422ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 5.423688ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... 
(200; 5.703408ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.807945ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 5.935492ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 6.268487ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname2/proxy/: tls qux (200; 6.30656ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 6.351429ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 6.334134ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 6.302595ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 6.315125ms) May 17 13:25:48.085: INFO: (18) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: test<... (200; 6.329066ms) May 17 13:25:48.087: INFO: (19) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:460/proxy/: tls baz (200; 1.79519ms) May 17 13:25:48.090: INFO: (19) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.72986ms) May 17 13:25:48.090: INFO: (19) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:160/proxy/: foo (200; 4.691205ms) May 17 13:25:48.091: INFO: (19) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck/proxy/: test (200; 4.849957ms) May 17 13:25:48.091: INFO: (19) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 5.052261ms) May 17 13:25:48.091: INFO: (19) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:1080/proxy/: ... 
(200; 5.034642ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/pods/http:proxy-service-hxlhr-dj4ck:162/proxy/: bar (200; 6.237358ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/services/https:proxy-service-hxlhr:tlsportname1/proxy/: tls baz (200; 6.084749ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/pods/proxy-service-hxlhr-dj4ck:1080/proxy/: test<... (200; 6.001569ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:462/proxy/: tls qux (200; 6.133966ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname2/proxy/: bar (200; 6.16218ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/services/proxy-service-hxlhr:portname1/proxy/: foo (200; 6.220709ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname2/proxy/: bar (200; 6.191704ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/services/http:proxy-service-hxlhr:portname1/proxy/: foo (200; 6.236327ms) May 17 13:25:48.092: INFO: (19) /api/v1/namespaces/proxy-8123/pods/https:proxy-service-hxlhr-dj4ck:443/proxy/: >> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-19226d06-67e1-46f8-83c2-065a20fbf9cf STEP: Creating a pod to test consume configMaps May 17 13:25:58.590: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5" in namespace "projected-7895" to be "success or failure" May 17 13:25:58.633: INFO: Pod "pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5": Phase="Pending", 
Reason="", readiness=false. Elapsed: 43.095923ms May 17 13:26:00.637: INFO: Pod "pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046572298s May 17 13:26:02.904: INFO: Pod "pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5": Phase="Running", Reason="", readiness=true. Elapsed: 4.313673208s May 17 13:26:04.908: INFO: Pod "pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.317308294s STEP: Saw pod success May 17 13:26:04.908: INFO: Pod "pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5" satisfied condition "success or failure" May 17 13:26:04.910: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5 container projected-configmap-volume-test: STEP: delete the pod May 17 13:26:04.960: INFO: Waiting for pod pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5 to disappear May 17 13:26:05.035: INFO: Pod pod-projected-configmaps-1d2637a7-b434-482b-9d4f-d4974ad35ec5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:26:05.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7895" for this suite. 
May 17 13:26:11.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:26:11.154: INFO: namespace projected-7895 deletion completed in 6.115672149s • [SLOW TEST:12.694 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:26:11.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 13:26:11.315: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f" in namespace "downward-api-2059" to be "success or failure" May 17 13:26:11.437: INFO: Pod "downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 121.400686ms
May 17 13:26:13.441: INFO: Pod "downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125673871s
May 17 13:26:15.445: INFO: Pod "downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.130006479s
May 17 13:26:17.448: INFO: Pod "downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.132758183s
STEP: Saw pod success
May 17 13:26:17.448: INFO: Pod "downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f" satisfied condition "success or failure"
May 17 13:26:17.450: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f container client-container:
STEP: delete the pod
May 17 13:26:17.708: INFO: Waiting for pod downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f to disappear
May 17 13:26:18.035: INFO: Pod downwardapi-volume-a6de1d40-9913-47c5-a96e-2e5093b8f18f no longer exists
[AfterEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:26:18.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2059" for this suite.
May 17 13:26:24.131: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:26:24.195: INFO: namespace downward-api-2059 deletion completed in 6.156336233s
• [SLOW TEST:13.041 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:26:24.196: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
May 17 13:26:24.386: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1742'
May 17 13:26:24.735: INFO: stderr: ""
May 17 13:26:24.735: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
May 17 13:26:25.739: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:26:25.739: INFO: Found 0 / 1
May 17 13:26:27.010: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:26:27.010: INFO: Found 0 / 1
May 17 13:26:27.738: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:26:27.738: INFO: Found 0 / 1
May 17 13:26:28.952: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:26:28.952: INFO: Found 0 / 1
May 17 13:26:29.739: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:26:29.739: INFO: Found 0 / 1
May 17 13:26:30.770: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:26:30.770: INFO: Found 1 / 1
May 17 13:26:30.770: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
STEP: patching all pods
May 17 13:26:30.773: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:26:30.773: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
May 17 13:26:30.773: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-88vk5 --namespace=kubectl-1742 -p {"metadata":{"annotations":{"x":"y"}}}'
May 17 13:26:30.863: INFO: stderr: ""
May 17 13:26:30.863: INFO: stdout: "pod/redis-master-88vk5 patched\n"
STEP: checking annotations
May 17 13:26:30.940: INFO: Selector matched 1 pods for map[app:redis]
May 17 13:26:30.940: INFO: ForEach: Found 1 pods from the filter. Now looping through them.
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:26:30.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1742" for this suite.
May 17 13:26:54.992: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:26:55.060: INFO: namespace kubectl-1742 deletion completed in 24.115762814s
• [SLOW TEST:30.864 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl patch
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should add annotations for pods in rc [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:26:55.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420
[It] should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 17 13:26:55.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
--namespace=kubectl-8418'
May 17 13:26:55.276: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 17 13:26:55.276: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created
[AfterEach] [k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426
May 17 13:26:57.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-8418'
May 17 13:26:57.613: INFO: stderr: ""
May 17 13:26:57.613: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:26:57.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8418" for this suite.
May 17 13:27:03.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:27:03.703: INFO: namespace kubectl-8418 deletion completed in 6.086839189s
• [SLOW TEST:8.642 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl run default
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should create an rc or deployment from an image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:27:03.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 17 13:27:03.858: INFO: Waiting up to 5m0s for pod "downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e" in namespace "downward-api-9780" to be "success or failure"
May 17 13:27:03.913: INFO: Pod "downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 55.406028ms
May 17 13:27:05.918: INFO: Pod "downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e": Phase="Pending", Reason="", readiness=false.
Elapsed: 2.060132656s
May 17 13:27:07.995: INFO: Pod "downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.13705525s
May 17 13:27:09.999: INFO: Pod "downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.141300564s
STEP: Saw pod success
May 17 13:27:09.999: INFO: Pod "downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e" satisfied condition "success or failure"
May 17 13:27:10.002: INFO: Trying to get logs from node iruya-worker2 pod downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e container dapi-container:
STEP: delete the pod
May 17 13:27:10.116: INFO: Waiting for pod downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e to disappear
May 17 13:27:10.127: INFO: Pod downward-api-8314d0ec-dd4a-488d-8153-2b35c11d9c5e no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:27:10.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9780" for this suite.
May 17 13:27:16.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:27:16.266: INFO: namespace downward-api-9780 deletion completed in 6.135757583s
• [SLOW TEST:12.562 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:27:16.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected
'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:27:59.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-290" for this suite.
May 17 13:28:07.934: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:28:08.001: INFO: namespace container-runtime-290 deletion completed in 8.102941175s
• [SLOW TEST:51.735 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
when starting a container that exits
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:28:08.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-06e53f86-5309-4b9c-b524-ce4d438e19a3
STEP: Creating a pod to test consume configMaps
May 17 13:28:08.234: INFO: Waiting up to 5m0s for pod "pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4" in namespace "configmap-9581" to be "success or failure"
May 17 13:28:08.242: INFO: Pod "pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.389524ms
May 17 13:28:10.247: INFO: Pod "pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012860463s
May 17 13:28:12.379: INFO: Pod "pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144514749s
May 17 13:28:14.382: INFO: Pod "pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4": Phase="Running", Reason="", readiness=true. Elapsed: 6.147894653s
May 17 13:28:16.386: INFO: Pod "pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 8.151640501s
STEP: Saw pod success
May 17 13:28:16.386: INFO: Pod "pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4" satisfied condition "success or failure"
May 17 13:28:16.388: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4 container configmap-volume-test:
STEP: delete the pod
May 17 13:28:16.412: INFO: Waiting for pod pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4 to disappear
May 17 13:28:16.471: INFO: Pod pod-configmaps-4ec1c06a-eec0-466c-9e25-2a9b9ec919a4 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:28:16.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9581" for this suite.
May 17 13:28:22.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:28:22.629: INFO: namespace configmap-9581 deletion completed in 6.153412201s
• [SLOW TEST:14.628 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:28:22.629: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service
account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
May 17 13:28:22.764: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix003783737/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:28:22.838: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3801" for this suite.
May 17 13:28:28.909: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:28:28.982: INFO: namespace kubectl-3801 deletion completed in 6.140670351s
• [SLOW TEST:6.353 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Proxy server
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support --unix-socket=/path [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:28:28.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object,
basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:28:29.166: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
May 17 13:28:29.172: INFO: Number of nodes with available pods: 0
May 17 13:28:29.172: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
May 17 13:28:29.277: INFO: Number of nodes with available pods: 0
May 17 13:28:29.277: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:30.282: INFO: Number of nodes with available pods: 0
May 17 13:28:30.282: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:31.281: INFO: Number of nodes with available pods: 0
May 17 13:28:31.281: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:32.282: INFO: Number of nodes with available pods: 0
May 17 13:28:32.282: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:33.281: INFO: Number of nodes with available pods: 0
May 17 13:28:33.281: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:34.282: INFO: Number of nodes with available pods: 1
May 17 13:28:34.282: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
May 17 13:28:34.355: INFO: Number of nodes with available pods: 1
May 17 13:28:34.355: INFO: Number of running nodes: 0, number of available pods: 1
May 17 13:28:35.359: INFO: Number of nodes with available pods: 0
May 17 13:28:35.359: INFO: Number of running nodes: 0, number of available
pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
May 17 13:28:35.409: INFO: Number of nodes with available pods: 0
May 17 13:28:35.409: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:36.412: INFO: Number of nodes with available pods: 0
May 17 13:28:36.412: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:37.412: INFO: Number of nodes with available pods: 0
May 17 13:28:37.412: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:38.413: INFO: Number of nodes with available pods: 0
May 17 13:28:38.413: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:39.413: INFO: Number of nodes with available pods: 0
May 17 13:28:39.413: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:40.413: INFO: Number of nodes with available pods: 0
May 17 13:28:40.413: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:41.413: INFO: Number of nodes with available pods: 0
May 17 13:28:41.413: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:42.412: INFO: Number of nodes with available pods: 0
May 17 13:28:42.412: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:43.413: INFO: Number of nodes with available pods: 0
May 17 13:28:43.413: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:44.412: INFO: Number of nodes with available pods: 0
May 17 13:28:44.412: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:45.413: INFO: Number of nodes with available pods: 0
May 17 13:28:45.413: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:46.413: INFO: Number of nodes with available pods: 0
May 17 13:28:46.413: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:28:47.412: INFO: Number of nodes with available pods: 0
May 17 13:28:47.412: INFO: Node iruya-worker is running more than
one daemon pod
May 17 13:28:48.412: INFO: Number of nodes with available pods: 1
May 17 13:28:48.413: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2023, will wait for the garbage collector to delete the pods
May 17 13:28:48.476: INFO: Deleting DaemonSet.extensions daemon-set took: 5.757319ms
May 17 13:28:48.776: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.280795ms
May 17 13:29:02.281: INFO: Number of nodes with available pods: 0
May 17 13:29:02.281: INFO: Number of running nodes: 0, number of available pods: 0
May 17 13:29:02.283: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2023/daemonsets","resourceVersion":"11399711"},"items":null}
May 17 13:29:02.285: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2023/pods","resourceVersion":"11399711"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:29:02.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2023" for this suite.
May 17 13:29:08.371: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:29:08.453: INFO: namespace daemonsets-2023 deletion completed in 6.129392016s
• [SLOW TEST:39.470 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should run and stop complex daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:29:08.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:29:39.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8292" for this suite.
May 17 13:29:45.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:29:45.190: INFO: namespace namespaces-8292 deletion completed in 6.111078956s
STEP: Destroying namespace "nsdeletetest-9383" for this suite.
May 17 13:29:45.192: INFO: Namespace nsdeletetest-9383 was already deleted
STEP: Destroying namespace "nsdeletetest-4580" for this suite.
May 17 13:29:51.217: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:29:51.285: INFO: namespace nsdeletetest-4580 deletion completed in 6.092733328s
• [SLOW TEST:42.832 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should ensure that all pods are removed when a namespace is deleted [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:29:51.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 17 13:29:51.408: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587" in namespace "projected-895" to be "success or failure"
May 17 13:29:51.469: INFO: Pod "downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587": Phase="Pending", Reason="", readiness=false. Elapsed: 61.391251ms
May 17 13:29:53.474: INFO: Pod "downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065725153s
May 17 13:29:55.480: INFO: Pod "downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587": Phase="Pending", Reason="", readiness=false. Elapsed: 4.071732403s
May 17 13:29:57.483: INFO: Pod "downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 6.075394631s
STEP: Saw pod success
May 17 13:29:57.483: INFO: Pod "downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587" satisfied condition "success or failure"
May 17 13:29:57.486: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587 container client-container:
STEP: delete the pod
May 17 13:29:57.635: INFO: Waiting for pod downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587 to disappear
May 17 13:29:57.670: INFO: Pod downwardapi-volume-bd975689-3dea-4ef8-b1f6-d4de2e023587 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:29:57.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-895" for this suite.
May 17 13:30:03.699: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:30:03.764: INFO: namespace projected-895 deletion completed in 6.09041855s
• [SLOW TEST:12.479 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:30:03.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default
service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 17 13:30:12.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-093de669-abc7-42b7-9d1b-d0a35e8ece7b -c busybox-main-container --namespace=emptydir-3291 -- cat /usr/share/volumeshare/shareddata.txt' May 17 13:30:12.241: INFO: stderr: "I0517 13:30:12.132276 964 log.go:172] (0xc00098a370) (0xc0008f08c0) Create stream\nI0517 13:30:12.132336 964 log.go:172] (0xc00098a370) (0xc0008f08c0) Stream added, broadcasting: 1\nI0517 13:30:12.134291 964 log.go:172] (0xc00098a370) Reply frame received for 1\nI0517 13:30:12.134316 964 log.go:172] (0xc00098a370) (0xc0008f0960) Create stream\nI0517 13:30:12.134324 964 log.go:172] (0xc00098a370) (0xc0008f0960) Stream added, broadcasting: 3\nI0517 13:30:12.134989 964 log.go:172] (0xc00098a370) Reply frame received for 3\nI0517 13:30:12.135014 964 log.go:172] (0xc00098a370) (0xc0005fa280) Create stream\nI0517 13:30:12.135021 964 log.go:172] (0xc00098a370) (0xc0005fa280) Stream added, broadcasting: 5\nI0517 13:30:12.135613 964 log.go:172] (0xc00098a370) Reply frame received for 5\nI0517 13:30:12.233692 964 log.go:172] (0xc00098a370) Data frame received for 5\nI0517 13:30:12.233742 964 log.go:172] (0xc0005fa280) (5) Data frame handling\nI0517 13:30:12.233772 964 log.go:172] (0xc00098a370) Data frame received for 3\nI0517 13:30:12.233787 964 log.go:172] (0xc0008f0960) (3) Data frame handling\nI0517 13:30:12.233796 964 log.go:172] (0xc0008f0960) (3) Data frame sent\nI0517 13:30:12.233804 964 log.go:172] (0xc00098a370) Data frame received for 3\nI0517 13:30:12.233810 964 log.go:172] (0xc0008f0960) (3) Data frame handling\nI0517 13:30:12.235164 964 
log.go:172] (0xc00098a370) Data frame received for 1\nI0517 13:30:12.235196 964 log.go:172] (0xc0008f08c0) (1) Data frame handling\nI0517 13:30:12.235208 964 log.go:172] (0xc0008f08c0) (1) Data frame sent\nI0517 13:30:12.235252 964 log.go:172] (0xc00098a370) (0xc0008f08c0) Stream removed, broadcasting: 1\nI0517 13:30:12.235280 964 log.go:172] (0xc00098a370) Go away received\nI0517 13:30:12.235600 964 log.go:172] (0xc00098a370) (0xc0008f08c0) Stream removed, broadcasting: 1\nI0517 13:30:12.235614 964 log.go:172] (0xc00098a370) (0xc0008f0960) Stream removed, broadcasting: 3\nI0517 13:30:12.235619 964 log.go:172] (0xc00098a370) (0xc0005fa280) Stream removed, broadcasting: 5\n" May 17 13:30:12.241: INFO: stdout: "Hello from the busy-box sub-container\n" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:30:12.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3291" for this suite. 
May 17 13:30:18.280: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:30:18.444: INFO: namespace emptydir-3291 deletion completed in 6.198354405s • [SLOW TEST:14.680 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:30:18.444: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with configMap that has name projected-configmap-test-upd-a88463bf-7020-4b23-a4a0-b6ff4107e386 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-a88463bf-7020-4b23-a4a0-b6ff4107e386 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:31:52.128: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8361" for this suite. 
May 17 13:32:14.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:32:14.226: INFO: namespace projected-8361 deletion completed in 22.094733651s • [SLOW TEST:115.782 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:32:14.227: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override command May 17 13:32:14.479: INFO: Waiting up to 5m0s for pod "client-containers-5730f15a-e73e-4134-8169-896eaceb7532" in namespace "containers-7526" to be "success or failure" May 17 13:32:14.510: INFO: Pod "client-containers-5730f15a-e73e-4134-8169-896eaceb7532": Phase="Pending", Reason="", readiness=false. Elapsed: 31.306479ms May 17 13:32:16.516: INFO: Pod "client-containers-5730f15a-e73e-4134-8169-896eaceb7532": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.037314618s May 17 13:32:18.521: INFO: Pod "client-containers-5730f15a-e73e-4134-8169-896eaceb7532": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042061097s May 17 13:32:20.525: INFO: Pod "client-containers-5730f15a-e73e-4134-8169-896eaceb7532": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046593466s STEP: Saw pod success May 17 13:32:20.525: INFO: Pod "client-containers-5730f15a-e73e-4134-8169-896eaceb7532" satisfied condition "success or failure" May 17 13:32:20.528: INFO: Trying to get logs from node iruya-worker2 pod client-containers-5730f15a-e73e-4134-8169-896eaceb7532 container test-container: STEP: delete the pod May 17 13:32:20.638: INFO: Waiting for pod client-containers-5730f15a-e73e-4134-8169-896eaceb7532 to disappear May 17 13:32:20.680: INFO: Pod client-containers-5730f15a-e73e-4134-8169-896eaceb7532 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:32:20.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7526" for this suite. 
May 17 13:32:26.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:32:26.826: INFO: namespace containers-7526 deletion completed in 6.140954951s • [SLOW TEST:12.599 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:32:26.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 17 13:32:26.989: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 17 13:32:29.115: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:32:30.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-1754" for this suite. May 17 13:32:38.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:32:38.635: INFO: namespace replication-controller-1754 deletion completed in 8.165768832s • [SLOW TEST:11.808 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:32:38.635: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 13:32:38.787: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d" in namespace "downward-api-22" to be "success or 
failure" May 17 13:32:38.825: INFO: Pod "downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d": Phase="Pending", Reason="", readiness=false. Elapsed: 38.828517ms May 17 13:32:40.861: INFO: Pod "downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074791391s May 17 13:32:42.885: INFO: Pod "downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.098667651s May 17 13:32:44.889: INFO: Pod "downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.101934784s STEP: Saw pod success May 17 13:32:44.889: INFO: Pod "downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d" satisfied condition "success or failure" May 17 13:32:44.891: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d container client-container: STEP: delete the pod May 17 13:32:45.014: INFO: Waiting for pod downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d to disappear May 17 13:32:45.051: INFO: Pod downwardapi-volume-8a5a6ee7-24b7-4987-b1c8-f72be0e2400d no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:32:45.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-22" for this suite. 
May 17 13:32:51.136: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:32:51.208: INFO: namespace downward-api-22 deletion completed in 6.153164785s • [SLOW TEST:12.572 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:32:51.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0517 13:33:01.602339 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 17 13:33:01.602: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:33:01.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7721" for this suite. 
May 17 13:33:07.630: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:33:07.704: INFO: namespace gc-7721 deletion completed in 6.099416856s • [SLOW TEST:16.496 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:33:07.705: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 17 13:33:07.859: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:33:22.361: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "pods-6309" for this suite. May 17 13:33:28.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:33:28.602: INFO: namespace pods-6309 deletion completed in 6.237363067s • [SLOW TEST:20.897 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:33:28.603: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 17 13:33:28.865: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5044,SelfLink:/api/v1/namespaces/watch-5044/configmaps/e2e-watch-test-watch-closed,UID:23603d18-5fe8-4bb9-84ea-056c2cf00fa9,ResourceVersion:11400553,Generation:0,CreationTimestamp:2020-05-17 13:33:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 17 13:33:28.865: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5044,SelfLink:/api/v1/namespaces/watch-5044/configmaps/e2e-watch-test-watch-closed,UID:23603d18-5fe8-4bb9-84ea-056c2cf00fa9,ResourceVersion:11400554,Generation:0,CreationTimestamp:2020-05-17 13:33:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 17 13:33:28.929: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5044,SelfLink:/api/v1/namespaces/watch-5044/configmaps/e2e-watch-test-watch-closed,UID:23603d18-5fe8-4bb9-84ea-056c2cf00fa9,ResourceVersion:11400555,Generation:0,CreationTimestamp:2020-05-17 13:33:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 17 13:33:28.929: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-5044,SelfLink:/api/v1/namespaces/watch-5044/configmaps/e2e-watch-test-watch-closed,UID:23603d18-5fe8-4bb9-84ea-056c2cf00fa9,ResourceVersion:11400556,Generation:0,CreationTimestamp:2020-05-17 13:33:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:33:28.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5044" for this suite. 
May 17 13:33:35.080: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:33:35.153: INFO: namespace watch-5044 deletion completed in 6.134348429s • [SLOW TEST:6.551 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:33:35.154: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 17 13:33:35.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4810' May 17 13:33:38.658: INFO: stderr: "" May 17 13:33:38.658: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come 
up. May 17 13:33:38.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4810' May 17 13:33:38.828: INFO: stderr: "" May 17 13:33:38.829: INFO: stdout: "update-demo-nautilus-4ddtw update-demo-nautilus-rdmjx " May 17 13:33:38.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ddtw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4810' May 17 13:33:39.019: INFO: stderr: "" May 17 13:33:39.019: INFO: stdout: "" May 17 13:33:39.019: INFO: update-demo-nautilus-4ddtw is created but not running May 17 13:33:44.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4810' May 17 13:33:44.108: INFO: stderr: "" May 17 13:33:44.108: INFO: stdout: "update-demo-nautilus-4ddtw update-demo-nautilus-rdmjx " May 17 13:33:44.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ddtw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4810' May 17 13:33:44.215: INFO: stderr: "" May 17 13:33:44.215: INFO: stdout: "" May 17 13:33:44.215: INFO: update-demo-nautilus-4ddtw is created but not running May 17 13:33:49.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4810' May 17 13:33:49.318: INFO: stderr: "" May 17 13:33:49.318: INFO: stdout: "update-demo-nautilus-4ddtw update-demo-nautilus-rdmjx " May 17 13:33:49.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ddtw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4810' May 17 13:33:49.425: INFO: stderr: "" May 17 13:33:49.425: INFO: stdout: "true" May 17 13:33:49.425: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4ddtw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4810' May 17 13:33:49.530: INFO: stderr: "" May 17 13:33:49.530: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 13:33:49.530: INFO: validating pod update-demo-nautilus-4ddtw May 17 13:33:49.535: INFO: got data: { "image": "nautilus.jpg" } May 17 13:33:49.535: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 17 13:33:49.535: INFO: update-demo-nautilus-4ddtw is verified up and running May 17 13:33:49.535: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rdmjx -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4810' May 17 13:33:49.631: INFO: stderr: "" May 17 13:33:49.631: INFO: stdout: "true" May 17 13:33:49.632: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rdmjx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4810' May 17 13:33:49.719: INFO: stderr: "" May 17 13:33:49.720: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 13:33:49.720: INFO: validating pod update-demo-nautilus-rdmjx May 17 13:33:49.748: INFO: got data: { "image": "nautilus.jpg" } May 17 13:33:49.748: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 17 13:33:49.748: INFO: update-demo-nautilus-rdmjx is verified up and running STEP: using delete to clean up resources May 17 13:33:49.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4810' May 17 13:33:49.878: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 17 13:33:49.878: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 17 13:33:49.878: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4810' May 17 13:33:50.034: INFO: stderr: "No resources found.\n" May 17 13:33:50.034: INFO: stdout: "" May 17 13:33:50.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4810 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 17 13:33:50.202: INFO: stderr: "" May 17 13:33:50.202: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:33:50.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4810" for this suite. 
May 17 13:33:56.510: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:33:56.580: INFO: namespace kubectl-4810 deletion completed in 6.374429434s • [SLOW TEST:21.426 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:33:56.580: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on node default medium May 17 13:33:56.734: INFO: Waiting up to 5m0s for pod "pod-33c0480d-91d6-408a-a708-3b7afd34c90e" in namespace "emptydir-7235" to be "success or failure" May 17 13:33:56.776: INFO: Pod "pod-33c0480d-91d6-408a-a708-3b7afd34c90e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 41.676657ms May 17 13:33:58.833: INFO: Pod "pod-33c0480d-91d6-408a-a708-3b7afd34c90e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098376884s May 17 13:34:00.837: INFO: Pod "pod-33c0480d-91d6-408a-a708-3b7afd34c90e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103045949s May 17 13:34:02.952: INFO: Pod "pod-33c0480d-91d6-408a-a708-3b7afd34c90e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.218125347s STEP: Saw pod success May 17 13:34:02.952: INFO: Pod "pod-33c0480d-91d6-408a-a708-3b7afd34c90e" satisfied condition "success or failure" May 17 13:34:02.956: INFO: Trying to get logs from node iruya-worker2 pod pod-33c0480d-91d6-408a-a708-3b7afd34c90e container test-container: STEP: delete the pod May 17 13:34:03.513: INFO: Waiting for pod pod-33c0480d-91d6-408a-a708-3b7afd34c90e to disappear May 17 13:34:03.564: INFO: Pod pod-33c0480d-91d6-408a-a708-3b7afd34c90e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:34:03.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7235" for this suite. 
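[Editor's note] This test creates a pod that mounts an `emptyDir` with no `medium` set and checks the mount's mode from inside the container. A minimal sketch of that shape, assuming busybox in place of the framework's own mounttest image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-mode-check    # name is illustrative; the test generates a UUID name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Print the octal mode of the mount point, then exit; the test reads it
    # from the container log after the pod reaches Succeeded.
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # empty medium = node's default backing filesystem
```

The "success or failure" condition in the log is simply the pod phase reaching `Succeeded` (command exited 0) rather than `Failed`.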
May 17 13:34:09.670: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:34:09.736: INFO: namespace emptydir-7235 deletion completed in 6.082972355s • [SLOW TEST:13.156 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:34:09.737: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:35:09.950: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2753" for this suite. 
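[Editor's note] The container-probe test above runs a pod whose readiness probe always fails, then watches for a full minute to assert the pod is never Ready and never restarted. A minimal sketch, with image and timings assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: readiness-never-ready
spec:
  containers:
  - name: probe-test
    image: busybox
    command: ["sleep", "3600"]
    readinessProbe:
      exec:
        command: ["/bin/false"]   # always fails: pod stays Running, never Ready
      initialDelaySeconds: 5
      periodSeconds: 5
```

The key distinction the test exercises: a failing *readiness* probe only removes the pod from service endpoints; unlike a liveness probe, it never triggers a container restart, so the restart count must stay 0.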
May 17 13:35:34.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:35:34.441: INFO: namespace container-probe-2753 deletion completed in 24.486860046s • [SLOW TEST:84.705 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:35:34.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 17 13:35:34.728: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 17 13:35:34.785: INFO: Waiting for terminating namespaces to be deleted... 
May 17 13:35:34.787: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 17 13:35:34.795: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 17 13:35:34.795: INFO: Container kube-proxy ready: true, restart count 0 May 17 13:35:34.795: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 17 13:35:34.795: INFO: Container kindnet-cni ready: true, restart count 0 May 17 13:35:34.795: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 17 13:35:34.801: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 17 13:35:34.802: INFO: Container kube-proxy ready: true, restart count 0 May 17 13:35:34.802: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 17 13:35:34.802: INFO: Container kindnet-cni ready: true, restart count 0 May 17 13:35:34.802: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 17 13:35:34.802: INFO: Container coredns ready: true, restart count 0 May 17 13:35:34.802: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 17 13:35:34.802: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.160fd43903292609], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] 
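[Editor's note] The scheduling test creates a pod whose `nodeSelector` matches no node and asserts the expected `FailedScheduling` event. A minimal sketch of such a pod; label key/value and image are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  nodeSelector:
    this-label: matches-no-node   # no node carries this label, so scheduling must fail
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
```

The scheduler then emits exactly the event seen in the log: `0/3 nodes are available: 3 node(s) didn't match node selector.` The pod stays Pending forever; the test only needs to observe the event, not run the pod.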
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:35:35.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-9810" for this suite. May 17 13:35:41.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:35:41.926: INFO: namespace sched-pred-9810 deletion completed in 6.103253211s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:7.485 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:35:41.927: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 13:35:42.100: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b" in namespace "projected-4853" to be "success or failure" May 17 13:35:42.116: INFO: Pod "downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.261828ms May 17 13:35:44.170: INFO: Pod "downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069793667s May 17 13:35:46.175: INFO: Pod "downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074467242s May 17 13:35:48.350: INFO: Pod "downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.249675339s STEP: Saw pod success May 17 13:35:48.350: INFO: Pod "downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b" satisfied condition "success or failure" May 17 13:35:48.354: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b container client-container: STEP: delete the pod May 17 13:35:48.375: INFO: Waiting for pod downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b to disappear May 17 13:35:48.447: INFO: Pod downwardapi-volume-7c3f6ccd-bc1f-4756-9420-71dace5faa9b no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:35:48.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4853" for this suite. 
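[Editor's note] The projected downward API test mounts `limits.memory` for a container that sets no memory limit, and verifies the file falls back to the node's allocatable memory. A minimal sketch of the volume wiring, with image and paths assumed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-test
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # No resources.limits.memory is set on this container, which is the point:
    # the kubelet substitutes node allocatable memory into the projected file.
    command: ["sh", "-c", "cat /etc/podinfo/memory_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
```

The test then compares the file's contents (read from the container log) against the node's `status.allocatable.memory`.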
May 17 13:35:54.618: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:35:54.691: INFO: namespace projected-4853 deletion completed in 6.166499042s • [SLOW TEST:12.764 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:35:54.691: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:36:01.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-845" for this suite. 
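[Editor's note] The RC adoption test follows the three STEPs in the log: create a bare pod with a label, create a replication controller whose selector matches it, and verify the controller adopts the orphan instead of creating a new replica. A minimal sketch, using the nginx image the suite uses elsewhere:

```yaml
# Step 1: an orphan pod with a matching label, created first.
apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: pod-adoption
    image: docker.io/library/nginx:1.14-alpine
---
# Step 2: an RC whose selector matches the orphan. With replicas: 1 and one
# matching pod already running, the controller adopts it (sets an ownerReference)
# rather than launching a second pod.
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: pod-adoption
        image: docker.io/library/nginx:1.14-alpine
```

Step 3 in the log is the assertion: the pod now carries an `ownerReference` pointing at the RC, and the RC's status shows 1/1 replicas without any new pod being created.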
May 17 13:36:24.071: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:36:24.147: INFO: namespace replication-controller-845 deletion completed in 22.170828028s • [SLOW TEST:29.456 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:36:24.147: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-4107 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a new StatefulSet May 17 13:36:24.379: INFO: Found 0 stateful pods, waiting for 3 May 17 13:36:34.458: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 
13:36:34.459: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 13:36:34.459: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 17 13:36:44.383: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 13:36:44.383: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 13:36:44.384: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine May 17 13:36:44.410: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 17 13:36:54.577: INFO: Updating stateful set ss2 May 17 13:36:54.623: INFO: Waiting for Pod statefulset-4107/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c STEP: Restoring Pods to the correct revision when they are deleted May 17 13:37:06.087: INFO: Found 2 stateful pods, waiting for 3 May 17 13:37:16.091: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 17 13:37:16.092: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 17 13:37:16.092: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 17 13:37:16.113: INFO: Updating stateful set ss2 May 17 13:37:16.196: INFO: Waiting for Pod statefulset-4107/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 17 13:37:26.278: INFO: Updating stateful set ss2 May 17 13:37:26.397: INFO: Waiting for StatefulSet statefulset-4107/ss2 to complete update May 17 13:37:26.397: INFO: Waiting for Pod statefulset-4107/ss2-0 to have revision ss2-6c5cd755cd 
update revision ss2-7c9b54fd4c May 17 13:37:36.403: INFO: Waiting for StatefulSet statefulset-4107/ss2 to complete update May 17 13:37:36.403: INFO: Waiting for Pod statefulset-4107/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c May 17 13:37:46.432: INFO: Waiting for StatefulSet statefulset-4107/ss2 to complete update [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 17 13:37:56.405: INFO: Deleting all statefulset in ns statefulset-4107 May 17 13:37:56.408: INFO: Scaling statefulset ss2 to 0 May 17 13:38:26.454: INFO: Waiting for statefulset status.replicas updated to 0 May 17 13:38:26.456: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:38:26.505: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4107" for this suite. 
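[Editor's note] The canary and phased-rollout behavior in the StatefulSet log is driven by `updateStrategy.rollingUpdate.partition`. A minimal sketch consistent with the log (3 replicas, nginx 1.14 → 1.15); selector labels and service name are assumptions:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss2
spec:
  serviceName: test
  replicas: 3
  selector:
    matchLabels:
      app: ss2
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      partition: 2          # only ordinals >= 2 (i.e. ss2-2) get the new revision
  template:
    metadata:
      labels:
        app: ss2
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.15-alpine   # updated from 1.14-alpine
```

Setting `partition: 3` with 3 replicas applies the update to no pod at all ("Not applying an update when the partition is greater than the number of replicas"); `partition: 2` makes ss2-2 the canary; lowering it stepwise to 1 and then 0 produces the phased rollout the log shows (ss2-2, then ss2-1, then ss2-0 moving to revision ss2-6c5cd755cd).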
May 17 13:38:34.561: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:38:34.637: INFO: namespace statefulset-4107 deletion completed in 8.129598451s • [SLOW TEST:130.491 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:38:34.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-855 STEP: creating a selector STEP: Creating the service pods in kubernetes May 17 13:38:34.874: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 17 13:39:03.076: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.1.122:8080/dial?request=hostName&protocol=udp&host=10.244.2.235&port=8081&tries=1'] Namespace:pod-network-test-855 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 13:39:03.076: INFO: >>> kubeConfig: /root/.kube/config I0517 13:39:03.102240 6 log.go:172] (0xc000c9a790) (0xc001c954a0) Create stream I0517 13:39:03.102267 6 log.go:172] (0xc000c9a790) (0xc001c954a0) Stream added, broadcasting: 1 I0517 13:39:03.103796 6 log.go:172] (0xc000c9a790) Reply frame received for 1 I0517 13:39:03.103839 6 log.go:172] (0xc000c9a790) (0xc002f2e820) Create stream I0517 13:39:03.103853 6 log.go:172] (0xc000c9a790) (0xc002f2e820) Stream added, broadcasting: 3 I0517 13:39:03.104767 6 log.go:172] (0xc000c9a790) Reply frame received for 3 I0517 13:39:03.104812 6 log.go:172] (0xc000c9a790) (0xc0014b7540) Create stream I0517 13:39:03.104826 6 log.go:172] (0xc000c9a790) (0xc0014b7540) Stream added, broadcasting: 5 I0517 13:39:03.105919 6 log.go:172] (0xc000c9a790) Reply frame received for 5 I0517 13:39:03.225718 6 log.go:172] (0xc000c9a790) Data frame received for 3 I0517 13:39:03.225749 6 log.go:172] (0xc002f2e820) (3) Data frame handling I0517 13:39:03.225773 6 log.go:172] (0xc002f2e820) (3) Data frame sent I0517 13:39:03.226282 6 log.go:172] (0xc000c9a790) Data frame received for 3 I0517 13:39:03.226306 6 log.go:172] (0xc002f2e820) (3) Data frame handling I0517 13:39:03.226509 6 log.go:172] (0xc000c9a790) Data frame received for 5 I0517 13:39:03.226525 6 log.go:172] (0xc0014b7540) (5) Data frame handling I0517 13:39:03.228025 6 log.go:172] (0xc000c9a790) Data frame received for 1 I0517 13:39:03.228051 6 log.go:172] (0xc001c954a0) (1) Data frame handling I0517 13:39:03.228072 6 log.go:172] (0xc001c954a0) (1) Data frame sent I0517 13:39:03.228086 6 log.go:172] (0xc000c9a790) (0xc001c954a0) Stream removed, broadcasting: 1 I0517 13:39:03.228100 6 log.go:172] (0xc000c9a790) Go away received 
I0517 13:39:03.228287 6 log.go:172] (0xc000c9a790) (0xc001c954a0) Stream removed, broadcasting: 1 I0517 13:39:03.228317 6 log.go:172] (0xc000c9a790) (0xc002f2e820) Stream removed, broadcasting: 3 I0517 13:39:03.228338 6 log.go:172] (0xc000c9a790) (0xc0014b7540) Stream removed, broadcasting: 5 May 17 13:39:03.228: INFO: Waiting for endpoints: map[] May 17 13:39:03.231: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.122:8080/dial?request=hostName&protocol=udp&host=10.244.1.121&port=8081&tries=1'] Namespace:pod-network-test-855 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 13:39:03.231: INFO: >>> kubeConfig: /root/.kube/config I0517 13:39:03.260775 6 log.go:172] (0xc00028d340) (0xc0005efa40) Create stream I0517 13:39:03.260801 6 log.go:172] (0xc00028d340) (0xc0005efa40) Stream added, broadcasting: 1 I0517 13:39:03.262555 6 log.go:172] (0xc00028d340) Reply frame received for 1 I0517 13:39:03.262581 6 log.go:172] (0xc00028d340) (0xc0005efae0) Create stream I0517 13:39:03.262589 6 log.go:172] (0xc00028d340) (0xc0005efae0) Stream added, broadcasting: 3 I0517 13:39:03.263149 6 log.go:172] (0xc00028d340) Reply frame received for 3 I0517 13:39:03.263170 6 log.go:172] (0xc00028d340) (0xc001c955e0) Create stream I0517 13:39:03.263178 6 log.go:172] (0xc00028d340) (0xc001c955e0) Stream added, broadcasting: 5 I0517 13:39:03.263794 6 log.go:172] (0xc00028d340) Reply frame received for 5 I0517 13:39:03.343891 6 log.go:172] (0xc00028d340) Data frame received for 3 I0517 13:39:03.343916 6 log.go:172] (0xc0005efae0) (3) Data frame handling I0517 13:39:03.343930 6 log.go:172] (0xc0005efae0) (3) Data frame sent I0517 13:39:03.344685 6 log.go:172] (0xc00028d340) Data frame received for 3 I0517 13:39:03.344698 6 log.go:172] (0xc0005efae0) (3) Data frame handling I0517 13:39:03.344713 6 log.go:172] (0xc00028d340) Data frame received for 5 I0517 13:39:03.344720 6 log.go:172] 
(0xc001c955e0) (5) Data frame handling I0517 13:39:03.346381 6 log.go:172] (0xc00028d340) Data frame received for 1 I0517 13:39:03.346406 6 log.go:172] (0xc0005efa40) (1) Data frame handling I0517 13:39:03.346433 6 log.go:172] (0xc0005efa40) (1) Data frame sent I0517 13:39:03.346450 6 log.go:172] (0xc00028d340) (0xc0005efa40) Stream removed, broadcasting: 1 I0517 13:39:03.346511 6 log.go:172] (0xc00028d340) Go away received I0517 13:39:03.346541 6 log.go:172] (0xc00028d340) (0xc0005efa40) Stream removed, broadcasting: 1 I0517 13:39:03.346584 6 log.go:172] (0xc00028d340) (0xc0005efae0) Stream removed, broadcasting: 3 I0517 13:39:03.346600 6 log.go:172] (0xc00028d340) (0xc001c955e0) Stream removed, broadcasting: 5 May 17 13:39:03.346: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:39:03.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-855" for this suite. 
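[Editor's note] The intra-pod UDP check works by exec-ing a curl inside a host-network test pod against an HTTP "dial" endpoint on one server pod, which in turn sends a UDP probe to another server pod and reports the hostname it got back. A minimal sketch of one server pod; the image name/tag is an assumption (the suite ships its own netexec test image):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: netserver-0
  labels:
    selector-uuid: netserver      # label key is illustrative; the test generates one
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/netexec:1.1   # image/tag assumed
    ports:
    - containerPort: 8080          # HTTP control endpoint exposing /dial
    - containerPort: 8081
      protocol: UDP                # UDP echo endpoint probed across pods
```

The exec'd command in the log then has the form `curl 'http://<dialer-pod-ip>:8080/dial?request=hostName&protocol=udp&host=<target-pod-ip>&port=8081&tries=1'`; an empty failure map (`Waiting for endpoints: map[]`) means every target answered with its hostname.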
May 17 13:39:27.457: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:39:27.621: INFO: namespace pod-network-test-855 deletion completed in 24.270843763s • [SLOW TEST:52.983 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:39:27.621: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e May 17 13:39:27.784: INFO: Pod name my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e: Found 0 pods out of 1 May 17 13:39:32.789: INFO: Pod name my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e: Found 1 pods out of 1 May 17 13:39:32.789: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e" are running May 17 
13:39:34.797: INFO: Pod "my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e-l5btw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 13:39:27 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 13:39:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 13:39:27 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-17 13:39:27 +0000 UTC Reason: Message:}]) May 17 13:39:34.797: INFO: Trying to dial the pod May 17 13:39:39.852: INFO: Controller my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e: Got expected result from replica 1 [my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e-l5btw]: "my-hostname-basic-f6362f94-e4fb-4ba1-a335-19d992a1b25e-l5btw", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:39:39.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5362" for this suite. 
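[Editor's note] The "serve a basic image on each replica" test runs an RC whose container answers HTTP requests with its own pod name, so each replica can be verified individually. A minimal sketch; the test appends a UUID to the name (elided here) and the image tag is an assumption:

```yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # tag assumed
        ports:
        - containerPort: 9376     # responds to GET / with the pod's hostname
```

"Got expected result from replica 1" in the log means the dialed pod returned its own name (`...-l5btw`), proving that replica is serving.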
May 17 13:39:45.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:39:45.943: INFO: namespace replication-controller-5362 deletion completed in 6.087205093s • [SLOW TEST:18.322 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:39:45.943: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
May 17 13:39:46.218: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:46.262: INFO: Number of nodes with available pods: 0
May 17 13:39:46.262: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:47.268: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:47.271: INFO: Number of nodes with available pods: 0
May 17 13:39:47.271: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:48.267: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:48.270: INFO: Number of nodes with available pods: 0
May 17 13:39:48.270: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:49.267: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:49.271: INFO: Number of nodes with available pods: 0
May 17 13:39:49.271: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:50.268: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:50.272: INFO: Number of nodes with available pods: 0
May 17 13:39:50.272: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:51.266: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:51.268: INFO: Number of nodes with available pods: 2
May 17 13:39:51.268: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
May 17 13:39:51.295: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:51.359: INFO: Number of nodes with available pods: 1
May 17 13:39:51.359: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:52.364: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:52.367: INFO: Number of nodes with available pods: 1
May 17 13:39:52.367: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:53.402: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:53.406: INFO: Number of nodes with available pods: 1
May 17 13:39:53.406: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:54.395: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:54.433: INFO: Number of nodes with available pods: 1
May 17 13:39:54.433: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:55.408: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:55.411: INFO: Number of nodes with available pods: 1
May 17 13:39:55.411: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:39:56.363: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:39:56.400: INFO: Number of nodes with available pods: 2
May 17 13:39:56.400: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6648, will wait for the garbage collector to delete the pods
May 17 13:39:56.464: INFO: Deleting DaemonSet.extensions daemon-set took: 6.833288ms
May 17 13:39:56.765: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.254446ms
May 17 13:40:01.987: INFO: Number of nodes with available pods: 0
May 17 13:40:01.987: INFO: Number of running nodes: 0, number of available pods: 0
May 17 13:40:01.990: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6648/daemonsets","resourceVersion":"11401911"},"items":null}
May 17 13:40:01.992: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6648/pods","resourceVersion":"11401911"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:40:02.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6648" for this suite.
May 17 13:40:10.022: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:40:10.097: INFO: namespace daemonsets-6648 deletion completed in 8.094120089s
• [SLOW TEST:24.154 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should retry creating failed daemon pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:40:10.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-14b28b07-b903-416a-976f-c248461037a6
[AfterEach] [sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:40:10.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-460" for this suite.
May 17 13:40:16.415: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:40:16.554: INFO: namespace secrets-460 deletion completed in 6.239164513s
• [SLOW TEST:6.456 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
should fail to create secret due to empty secret key [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl replace should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:40:16.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 17 13:40:16.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-7276'
May 17 13:40:16.858: INFO: stderr: ""
May 17 13:40:16.858: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
May 17 13:40:21.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-7276 -o json'
May 17 13:40:22.004: INFO: stderr: ""
May 17 13:40:22.004: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-17T13:40:16Z\",\n \"labels\": {\n \"run\": \"e2e-test-nginx-pod\"\n },\n \"name\": \"e2e-test-nginx-pod\",\n \"namespace\": \"kubectl-7276\",\n \"resourceVersion\": \"11401991\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-7276/pods/e2e-test-nginx-pod\",\n \"uid\": \"39d31c8f-deea-43fe-a697-3ea980cdc649\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-nginx-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-c6w5h\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"iruya-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-c6w5h\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-c6w5h\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-17T13:40:16Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-17T13:40:20Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-17T13:40:20Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-17T13:40:16Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://c551de2853cacdf888c38915fe0570c6fb03f13c2ef6fbd16760424a21f041cc\",\n \"image\": \"docker.io/library/nginx:1.14-alpine\",\n \"imageID\": \"docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n \"lastState\": {},\n \"name\": \"e2e-test-nginx-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-17T13:40:20Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.6\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.238\",\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-17T13:40:16Z\"\n }\n}\n"
STEP: replace the image in the pod
May 17 13:40:22.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7276'
May 17 13:40:22.349: INFO: stderr: ""
May 17 13:40:22.349: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
May 17 13:40:22.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-7276'
May 17 13:40:32.308: INFO: stderr: ""
May 17 13:40:32.308: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:40:32.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7276" for this suite.
May 17 13:40:38.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:40:38.483: INFO: namespace kubectl-7276 deletion completed in 6.1690777s
• [SLOW TEST:21.929 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl replace
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update a single-container pod's image [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:40:38.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-24964b9e-d3bf-41ec-bf29-871c1b78870e
STEP: Creating secret with name s-test-opt-upd-0dbdb493-a7af-4e0b-9c8e-ccf5fd25d21a
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-24964b9e-d3bf-41ec-bf29-871c1b78870e
STEP: Updating secret s-test-opt-upd-0dbdb493-a7af-4e0b-9c8e-ccf5fd25d21a
STEP: Creating secret with name s-test-opt-create-46d6f80c-e114-4407-94a4-533fad3bd036
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:42:01.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7211" for this suite.
May 17 13:42:25.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:42:25.629: INFO: namespace secrets-7211 deletion completed in 24.085488503s
• [SLOW TEST:107.146 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
optional updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:42:25.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-55ca8070-e674-452e-90a0-16b6cc4c796d
STEP: Creating a pod to test consume secrets
May 17 13:42:26.172: INFO: Waiting up to 5m0s for pod "pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b" in namespace "secrets-9958" to be "success or failure"
May 17 13:42:26.195: INFO: Pod "pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b": Phase="Pending", Reason="", readiness=false. Elapsed: 23.154763ms
May 17 13:42:28.199: INFO: Pod "pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027294235s
May 17 13:42:30.203: INFO: Pod "pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030719652s
May 17 13:42:32.523: INFO: Pod "pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.350960098s
STEP: Saw pod success
May 17 13:42:32.523: INFO: Pod "pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b" satisfied condition "success or failure"
May 17 13:42:32.526: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b container secret-volume-test:
STEP: delete the pod
May 17 13:42:32.661: INFO: Waiting for pod pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b to disappear
May 17 13:42:32.710: INFO: Pod pod-secrets-03d415b4-c831-4c90-9824-9b41afa4706b no longer exists
[AfterEach] [sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:42:32.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9958" for this suite.
May 17 13:42:38.731: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:42:38.803: INFO: namespace secrets-9958 deletion completed in 6.089483149s
STEP: Destroying namespace "secret-namespace-3709" for this suite.
May 17 13:42:44.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:42:44.877: INFO: namespace secret-namespace-3709 deletion completed in 6.073350233s
• [SLOW TEST:19.247 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:42:44.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 17 13:42:45.103: INFO: Waiting up to 5m0s for pod "pod-9350512e-0330-441f-abdc-85bf77c84108" in namespace "emptydir-1767" to be "success or failure"
May 17 13:42:45.112: INFO: Pod "pod-9350512e-0330-441f-abdc-85bf77c84108": Phase="Pending", Reason="", readiness=false. Elapsed: 9.026805ms
May 17 13:42:47.164: INFO: Pod "pod-9350512e-0330-441f-abdc-85bf77c84108": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061159946s
May 17 13:42:49.168: INFO: Pod "pod-9350512e-0330-441f-abdc-85bf77c84108": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065080649s
May 17 13:42:51.172: INFO: Pod "pod-9350512e-0330-441f-abdc-85bf77c84108": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069564781s
STEP: Saw pod success
May 17 13:42:51.172: INFO: Pod "pod-9350512e-0330-441f-abdc-85bf77c84108" satisfied condition "success or failure"
May 17 13:42:51.175: INFO: Trying to get logs from node iruya-worker2 pod pod-9350512e-0330-441f-abdc-85bf77c84108 container test-container:
STEP: delete the pod
May 17 13:42:51.592: INFO: Waiting for pod pod-9350512e-0330-441f-abdc-85bf77c84108 to disappear
May 17 13:42:51.634: INFO: Pod pod-9350512e-0330-441f-abdc-85bf77c84108 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:42:51.634: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1767" for this suite.
May 17 13:42:57.648: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:42:57.717: INFO: namespace emptydir-1767 deletion completed in 6.079107449s
• [SLOW TEST:12.840 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[k8s.io] Pods should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:42:57.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
May 17 13:43:03.978: INFO: Pod pod-hostip-5b4e733c-2f58-4b40-86f5-230cf3777568 has hostIP: 172.17.0.5
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:43:03.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8221" for this suite.
May 17 13:43:26.059: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:43:26.130: INFO: namespace pods-8221 deletion completed in 22.148691349s
• [SLOW TEST:28.413 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should get a host IP [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:43:26.130: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8321.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8321.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8321.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8321.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8321.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8321.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 17 13:43:34.493: INFO: DNS probes using dns-8321/dns-test-dedc7285-2cfc-43ae-bfa0-2a2b8f78793d succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:43:34.573: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8321" for this suite.
May 17 13:43:40.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:43:40.874: INFO: namespace dns-8321 deletion completed in 6.292335068s
• [SLOW TEST:14.744 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:43:40.874: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:43:41.165: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"7e25566f-860b-465d-9660-3db67be5d7bd", Controller:(*bool)(0xc001e39292), BlockOwnerDeletion:(*bool)(0xc001e39293)}}
May 17 13:43:41.174: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e6f78dd3-c57c-49e1-91b0-b1bb31d74737", Controller:(*bool)(0xc0019cb3ba), BlockOwnerDeletion:(*bool)(0xc0019cb3bb)}}
May 17 13:43:41.219: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"ac07bde6-9702-4555-927a-3d8c9085e23c", Controller:(*bool)(0xc002a56582), BlockOwnerDeletion:(*bool)(0xc002a56583)}}
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:43:46.415: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9183" for this suite.
May 17 13:43:52.491: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:43:52.568: INFO: namespace gc-9183 deletion completed in 6.149681582s
• [SLOW TEST:11.694 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should not be blocked by dependency circle [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:43:52.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-2db19207-9cb2-4fda-9b3a-db19ac8cdac4
STEP: Creating a pod to test consume secrets
May 17 13:43:52.731: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039" in namespace "projected-5668" to be "success or failure"
May 17 13:43:52.811: INFO: Pod "pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039": Phase="Pending", Reason="", readiness=false. Elapsed: 79.601024ms
May 17 13:43:54.815: INFO: Pod "pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084215712s
May 17 13:43:56.943: INFO: Pod "pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039": Phase="Pending", Reason="", readiness=false. Elapsed: 4.212001417s
May 17 13:43:58.947: INFO: Pod "pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039": Phase="Running", Reason="", readiness=true. Elapsed: 6.215356175s
May 17 13:44:00.951: INFO: Pod "pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.219281556s
STEP: Saw pod success
May 17 13:44:00.951: INFO: Pod "pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039" satisfied condition "success or failure"
May 17 13:44:00.954: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039 container projected-secret-volume-test:
STEP: delete the pod
May 17 13:44:00.988: INFO: Waiting for pod pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039 to disappear
May 17 13:44:00.999: INFO: Pod pod-projected-secrets-a6720f08-c7f9-4b02-8cd2-a61294803039 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:44:00.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5668" for this suite.
May 17 13:44:07.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:44:07.189: INFO: namespace projected-5668 deletion completed in 6.187259813s
• [SLOW TEST:14.621 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:44:07.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-dd83e647-b192-41e7-b25a-8c18b60357a2
STEP: Creating a pod to test consume secrets
May 17 13:44:07.337: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515" in namespace "projected-1059" to be "success or failure"
May 17 13:44:07.385: INFO: Pod "pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515": Phase="Pending", Reason="", readiness=false. Elapsed: 47.844131ms
May 17 13:44:09.390: INFO: Pod "pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052310979s
May 17 13:44:11.394: INFO: Pod "pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056854729s
May 17 13:44:13.404: INFO: Pod "pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.06685513s
STEP: Saw pod success
May 17 13:44:13.404: INFO: Pod "pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515" satisfied condition "success or failure"
May 17 13:44:13.407: INFO: Trying to get logs from node iruya-worker pod pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515 container projected-secret-volume-test:
STEP: delete the pod
May 17 13:44:13.473: INFO: Waiting for pod pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515 to disappear
May 17 13:44:14.009: INFO: Pod pod-projected-secrets-bd2715d0-658a-44a9-bb77-f309fa1b3515 no longer exists
[AfterEach] [sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:44:14.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1059" for this suite.
May 17 13:44:20.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:44:20.331: INFO: namespace projected-1059 deletion completed in 6.227545405s

• [SLOW TEST:13.142 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:44:20.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:44:24.577: INFO: Waiting up to 5m0s for pod "client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3" in namespace "pods-2331" to be "success or failure"
May 17 13:44:24.599: INFO: Pod "client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.881674ms
May 17 13:44:26.656: INFO: Pod "client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078892783s
May 17 13:44:28.686: INFO: Pod "client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108948744s
May 17 13:44:30.830: INFO: Pod "client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.253256492s
STEP: Saw pod success
May 17 13:44:30.830: INFO: Pod "client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3" satisfied condition "success or failure"
May 17 13:44:30.834: INFO: Trying to get logs from node iruya-worker pod client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3 container env3cont:
STEP: delete the pod
May 17 13:44:31.171: INFO: Waiting for pod client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3 to disappear
May 17 13:44:31.200: INFO: Pod client-envvars-d1fc947f-88e2-408b-b446-ed9c54d5c3b3 no longer exists
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:44:31.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2331" for this suite.
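The "should contain environment variables for services" test above asserts the kubelet's service-discovery injection: for each active service, variables named after the upper-cased service name (dashes replaced by underscores) with `_SERVICE_HOST` / `_SERVICE_PORT` suffixes appear in the pod. A small sketch of that naming convention (the helper name is hypothetical):

```go
package main

import (
	"fmt"
	"strings"
)

// serviceEnvVars sketches the naming rule the kubelet applies when injecting
// service discovery variables into pods: the service name is upper-cased,
// dashes become underscores, and host/port suffixes are appended.
func serviceEnvVars(serviceName string) []string {
	base := strings.ToUpper(strings.ReplaceAll(serviceName, "-", "_"))
	return []string{base + "_SERVICE_HOST", base + "_SERVICE_PORT"}
}

func main() {
	fmt.Println(serviceEnvVars("fooservice-1"))
	// [FOOSERVICE_1_SERVICE_HOST FOOSERVICE_1_SERVICE_PORT]
}
```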
May 17 13:45:13.251: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:45:13.316: INFO: namespace pods-2331 deletion completed in 42.112280003s

• [SLOW TEST:52.985 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should contain environment variables for services [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:45:13.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
May 17 13:45:13.473: INFO: Waiting up to 5m0s for pod "client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343" in namespace "containers-8007" to be "success or failure"
May 17 13:45:13.510: INFO: Pod "client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343": Phase="Pending", Reason="", readiness=false. Elapsed: 36.101802ms
May 17 13:45:15.844: INFO: Pod "client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343": Phase="Pending", Reason="", readiness=false. Elapsed: 2.370892111s
May 17 13:45:17.848: INFO: Pod "client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343": Phase="Pending", Reason="", readiness=false. Elapsed: 4.374718203s
May 17 13:45:19.852: INFO: Pod "client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.37897187s
STEP: Saw pod success
May 17 13:45:19.852: INFO: Pod "client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343" satisfied condition "success or failure"
May 17 13:45:19.855: INFO: Trying to get logs from node iruya-worker2 pod client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343 container test-container:
STEP: delete the pod
May 17 13:45:20.012: INFO: Waiting for pod client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343 to disappear
May 17 13:45:20.020: INFO: Pod client-containers-22ef3627-a6b5-4870-96c0-4bd4ce374343 no longer exists
[AfterEach] [k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:45:20.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8007" for this suite.
May 17 13:45:26.041: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:45:26.108: INFO: namespace containers-8007 deletion completed in 6.084735867s

• [SLOW TEST:12.791 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:45:26.108: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 13:45:26.277: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
May 17 13:45:26.340: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:26.423: INFO: Number of nodes with available pods: 0
May 17 13:45:26.423: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:45:27.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:27.431: INFO: Number of nodes with available pods: 0
May 17 13:45:27.431: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:45:28.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:28.431: INFO: Number of nodes with available pods: 0
May 17 13:45:28.431: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:45:29.436: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:29.471: INFO: Number of nodes with available pods: 0
May 17 13:45:29.471: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:45:30.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:30.431: INFO: Number of nodes with available pods: 0
May 17 13:45:30.431: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:45:31.428: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:31.432: INFO: Number of nodes with available pods: 0
May 17 13:45:31.432: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:45:32.478: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:32.481: INFO: Number of nodes with available pods: 2
May 17 13:45:32.481: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
May 17 13:45:32.656: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:32.656: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:32.677: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:33.682: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:33.682: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:33.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:34.795: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:34.795: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:34.800: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:35.711: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:35.711: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:35.715: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:36.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:36.681: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:36.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:37.682: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:37.682: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:37.682: INFO: Pod daemon-set-wglhg is not available
May 17 13:45:37.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:38.682: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:38.682: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:38.682: INFO: Pod daemon-set-wglhg is not available
May 17 13:45:38.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:39.682: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:39.682: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:39.682: INFO: Pod daemon-set-wglhg is not available
May 17 13:45:39.685: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:40.711: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:40.711: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:40.711: INFO: Pod daemon-set-wglhg is not available
May 17 13:45:40.715: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:41.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:41.681: INFO: Wrong image for pod: daemon-set-wglhg. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:41.681: INFO: Pod daemon-set-wglhg is not available
May 17 13:45:41.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:42.693: INFO: Pod daemon-set-9gdjd is not available
May 17 13:45:42.693: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:42.697: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:43.711: INFO: Pod daemon-set-9gdjd is not available
May 17 13:45:43.711: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:43.715: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:44.687: INFO: Pod daemon-set-9gdjd is not available
May 17 13:45:44.687: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:44.691: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:45.843: INFO: Pod daemon-set-9gdjd is not available
May 17 13:45:45.843: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:45.847: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:46.705: INFO: Pod daemon-set-9gdjd is not available
May 17 13:45:46.705: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:46.709: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:47.684: INFO: Pod daemon-set-9gdjd is not available
May 17 13:45:47.684: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:47.688: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:48.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:48.684: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:49.915: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:50.059: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:50.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:50.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:50.684: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:51.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:51.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:51.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:52.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:52.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:52.685: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:53.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:53.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:53.684: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:54.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:54.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:54.685: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:55.682: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:55.682: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:55.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:56.684: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:56.684: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:56.689: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:57.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:57.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:57.689: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:58.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:58.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:58.685: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:45:59.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:45:59.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:45:59.684: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:00.681: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:46:00.681: INFO: Pod daemon-set-t6bvb is not available
May 17 13:46:00.686: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:01.706: INFO: Wrong image for pod: daemon-set-t6bvb. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
May 17 13:46:01.706: INFO: Pod daemon-set-t6bvb is not available
May 17 13:46:01.710: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:02.692: INFO: Pod daemon-set-2x6jp is not available
May 17 13:46:02.696: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
STEP: Check that daemon pods are still running on every node of the cluster.
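The repeated "DaemonSet pods can't tolerate node iruya-control-plane with taints ..." entries come from the e2e check excluding any node with an untolerated NoSchedule taint before counting available pods. A simplified, stdlib-only sketch of that matching rule (pared-down stand-ins for the corev1 Taint/Toleration types, not the real API):

```go
package main

import "fmt"

// Taint and Toleration keep only the fields relevant to the NoSchedule check.
type Taint struct {
	Key    string
	Value  string
	Effect string
}

type Toleration struct {
	Key      string
	Operator string // "Exists" or "Equal"
	Value    string
	Effect   string // empty tolerates all effects
}

// tolerated reports whether a taint is covered by any toleration. An
// untolerated NoSchedule taint (like node-role.kubernetes.io/master on the
// control-plane node) causes the node to be skipped, as in the log above.
func tolerated(t Taint, tols []Toleration) bool {
	for _, tol := range tols {
		if tol.Key != "" && tol.Key != t.Key {
			continue
		}
		if tol.Effect != "" && tol.Effect != t.Effect {
			continue
		}
		if tol.Operator == "Equal" && tol.Value != t.Value {
			continue
		}
		return true
	}
	return false
}

func main() {
	master := Taint{Key: "node-role.kubernetes.io/master", Effect: "NoSchedule"}
	// The test DaemonSet carries no matching toleration, so the node is skipped.
	fmt.Println(tolerated(master, nil)) // false
	// An Exists toleration for the key would admit the node.
	fmt.Println(tolerated(master, []Toleration{
		{Key: "node-role.kubernetes.io/master", Operator: "Exists"},
	})) // true
}
```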
May 17 13:46:02.699: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:02.702: INFO: Number of nodes with available pods: 1
May 17 13:46:02.702: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:46:03.730: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:03.733: INFO: Number of nodes with available pods: 1
May 17 13:46:03.733: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:46:04.736: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:04.740: INFO: Number of nodes with available pods: 1
May 17 13:46:04.740: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:46:05.711: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:05.714: INFO: Number of nodes with available pods: 1
May 17 13:46:05.714: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:46:06.850: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:06.853: INFO: Number of nodes with available pods: 1
May 17 13:46:06.853: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:46:07.958: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:07.962: INFO: Number of nodes with available pods: 1
May 17 13:46:07.962: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:46:08.915: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:08.919: INFO: Number of nodes with available pods: 1
May 17 13:46:08.919: INFO: Node iruya-worker is running more than one daemon pod
May 17 13:46:09.706: INFO: DaemonSet pods can't tolerate node iruya-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node
May 17 13:46:09.710: INFO: Number of nodes with available pods: 2
May 17 13:46:09.710: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1812, will wait for the garbage collector to delete the pods
May 17 13:46:09.810: INFO: Deleting DaemonSet.extensions daemon-set took: 35.371966ms
May 17 13:46:10.210: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.267762ms
May 17 13:46:22.814: INFO: Number of nodes with available pods: 0
May 17 13:46:22.814: INFO: Number of running nodes: 0, number of available pods: 0
May 17 13:46:22.816: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1812/daemonsets","resourceVersion":"11403125"},"items":null}
May 17 13:46:22.824: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1812/pods","resourceVersion":"11403126"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:46:22.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1812" for this suite.
May 17 13:46:30.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:46:30.925: INFO: namespace daemonsets-1812 deletion completed in 8.08874773s

• [SLOW TEST:64.817 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:46:30.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
May 17 13:46:44.868: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:46:44.933: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:46:46.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:46:46.937: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:46:48.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:46:48.937: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:46:50.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:46:50.937: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:46:52.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:46:52.938: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:46:54.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:46:54.937: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:46:56.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:46:56.938: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:46:58.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:46:58.938: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:47:00.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:47:00.938: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:47:02.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:47:02.987: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:47:04.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:47:05.065: INFO: Pod pod-with-poststart-exec-hook still exists
May 17 13:47:06.933: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
May 17 13:47:06.938: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:47:06.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2530" for this suite.
May 17 13:47:28.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:47:29.091: INFO: namespace container-lifecycle-hook-2530 deletion completed in 22.148901237s

• [SLOW TEST:58.165 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
when create a pod with lifecycle hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
should execute poststart exec hook properly [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:47:29.092: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-50cab668-739a-4e8c-94fa-7ec3960e408b
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-50cab668-739a-4e8c-94fa-7ec3960e408b
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:47:37.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3589" for this suite.
May 17 13:48:01.701: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:48:01.775: INFO: namespace configmap-3589 deletion completed in 24.152925553s

• [SLOW TEST:32.684 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
updates should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:48:01.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file
[LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 13:48:01.942: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e" in namespace "downward-api-1516" to be "success or failure" May 17 13:48:01.951: INFO: Pod "downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.182035ms May 17 13:48:03.956: INFO: Pod "downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013652388s May 17 13:48:05.960: INFO: Pod "downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017401075s May 17 13:48:07.964: INFO: Pod "downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.02201568s STEP: Saw pod success May 17 13:48:07.964: INFO: Pod "downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e" satisfied condition "success or failure" May 17 13:48:07.967: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e container client-container: STEP: delete the pod May 17 13:48:08.217: INFO: Waiting for pod downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e to disappear May 17 13:48:08.354: INFO: Pod downwardapi-volume-b5006fbb-be27-435d-a497-3508e824046e no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:48:08.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1516" for this suite. 
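The downward API test above asserts that an explicit per-item mode is applied to the projected file. A minimal manifest reproducing that setup might look like the following sketch; the pod and volume names, the image, and the 0400 mode are illustrative assumptions, not values read from the log:

```yaml
# Hedged sketch of the "should set mode on item file" scenario.
# Names, image, and the 0400 mode are assumptions for illustration.
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the file's permission bits so the mode can be checked from logs.
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400   # the per-item mode the test verifies
```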
May 17 13:48:14.395: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:48:14.552: INFO: namespace downward-api-1516 deletion completed in 6.194000401s • [SLOW TEST:12.776 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:48:14.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir volume type on tmpfs May 17 13:48:14.816: INFO: Waiting up to 5m0s for pod "pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916" in namespace "emptydir-526" to be "success or failure" May 17 13:48:14.857: INFO: Pod "pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916": Phase="Pending", Reason="", readiness=false. Elapsed: 40.419197ms May 17 13:48:16.863: INFO: Pod "pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.047155926s May 17 13:48:19.184: INFO: Pod "pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916": Phase="Running", Reason="", readiness=true. Elapsed: 4.368114492s May 17 13:48:21.189: INFO: Pod "pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.372807494s STEP: Saw pod success May 17 13:48:21.189: INFO: Pod "pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916" satisfied condition "success or failure" May 17 13:48:21.192: INFO: Trying to get logs from node iruya-worker2 pod pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916 container test-container: STEP: delete the pod May 17 13:48:21.308: INFO: Waiting for pod pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916 to disappear May 17 13:48:21.314: INFO: Pod pod-d61754a0-6eed-4fbd-9ad7-89c6c0a25916 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:48:21.315: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-526" for this suite. 
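The emptyDir test above mounts a tmpfs-backed volume and checks its mount mode. A hedged sketch of such a pod follows; names and image are assumptions, and `medium: Memory` is what selects tmpfs:

```yaml
# Hedged sketch of the "volume on tmpfs should have the correct mode" scenario.
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-example
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    # Show the filesystem type and permission bits of the mount point.
    command: ["sh", "-c", "mount | grep test-volume; stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory   # backs the volume with tmpfs instead of node disk
```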
May 17 13:48:27.355: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:48:27.436: INFO: namespace emptydir-526 deletion completed in 6.117269928s • [SLOW TEST:12.884 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:48:27.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 17 13:48:27.595: INFO: (0) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.865485ms) May 17 13:48:27.598: INFO: (1) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.523663ms) May 17 13:48:27.601: INFO: (2) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.46342ms) May 17 13:48:27.603: INFO: (3) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.654143ms) May 17 13:48:27.684: INFO: (4) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 80.655685ms) May 17 13:48:27.729: INFO: (5) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 45.036614ms) May 17 13:48:27.776: INFO: (6) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 46.712519ms) May 17 13:48:27.781: INFO: (7) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.698629ms) May 17 13:48:27.822: INFO: (8) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 41.164389ms) May 17 13:48:27.825: INFO: (9) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.681066ms) May 17 13:48:27.829: INFO: (10) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.689086ms) May 17 13:48:27.833: INFO: (11) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 4.152519ms) May 17 13:48:27.836: INFO: (12) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.894383ms) May 17 13:48:27.839: INFO: (13) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.990255ms) May 17 13:48:27.842: INFO: (14) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.481693ms) May 17 13:48:27.845: INFO: (15) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 3.020473ms) May 17 13:48:27.848: INFO: (16) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.890964ms) May 17 13:48:27.850: INFO: (17) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.26787ms) May 17 13:48:27.853: INFO: (18) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.806774ms) May 17 13:48:27.855: INFO: (19) /api/v1/nodes/iruya-worker:10250/proxy/logs/:
containers/
pods/
(200; 2.091652ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:48:27.855: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3996" for this suite. May 17 13:48:33.882: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:48:33.960: INFO: namespace proxy-3996 deletion completed in 6.102295604s • [SLOW TEST:6.523 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:48:33.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1914 STEP: creating a selector STEP: Creating the service pods in kubernetes May 17 13:48:34.129: INFO: Waiting up to 10m0s 
for all (but 0) nodes to be schedulable STEP: Creating test pods May 17 13:49:06.333: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.245 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1914 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 13:49:06.333: INFO: >>> kubeConfig: /root/.kube/config I0517 13:49:06.363833 6 log.go:172] (0xc0009f24d0) (0xc00030cdc0) Create stream I0517 13:49:06.363861 6 log.go:172] (0xc0009f24d0) (0xc00030cdc0) Stream added, broadcasting: 1 I0517 13:49:06.365751 6 log.go:172] (0xc0009f24d0) Reply frame received for 1 I0517 13:49:06.365794 6 log.go:172] (0xc0009f24d0) (0xc0001174a0) Create stream I0517 13:49:06.365805 6 log.go:172] (0xc0009f24d0) (0xc0001174a0) Stream added, broadcasting: 3 I0517 13:49:06.366691 6 log.go:172] (0xc0009f24d0) Reply frame received for 3 I0517 13:49:06.366717 6 log.go:172] (0xc0009f24d0) (0xc000b62000) Create stream I0517 13:49:06.366727 6 log.go:172] (0xc0009f24d0) (0xc000b62000) Stream added, broadcasting: 5 I0517 13:49:06.367570 6 log.go:172] (0xc0009f24d0) Reply frame received for 5 I0517 13:49:07.526256 6 log.go:172] (0xc0009f24d0) Data frame received for 5 I0517 13:49:07.526288 6 log.go:172] (0xc000b62000) (5) Data frame handling I0517 13:49:07.526321 6 log.go:172] (0xc0009f24d0) Data frame received for 3 I0517 13:49:07.526362 6 log.go:172] (0xc0001174a0) (3) Data frame handling I0517 13:49:07.526374 6 log.go:172] (0xc0001174a0) (3) Data frame sent I0517 13:49:07.526385 6 log.go:172] (0xc0009f24d0) Data frame received for 3 I0517 13:49:07.526395 6 log.go:172] (0xc0001174a0) (3) Data frame handling I0517 13:49:07.528064 6 log.go:172] (0xc0009f24d0) Data frame received for 1 I0517 13:49:07.528092 6 log.go:172] (0xc00030cdc0) (1) Data frame handling I0517 13:49:07.528126 6 log.go:172] (0xc00030cdc0) (1) Data frame sent I0517 13:49:07.528143 6 log.go:172] (0xc0009f24d0) (0xc00030cdc0) Stream 
removed, broadcasting: 1 I0517 13:49:07.528163 6 log.go:172] (0xc0009f24d0) Go away received I0517 13:49:07.528299 6 log.go:172] (0xc0009f24d0) (0xc00030cdc0) Stream removed, broadcasting: 1 I0517 13:49:07.528323 6 log.go:172] (0xc0009f24d0) (0xc0001174a0) Stream removed, broadcasting: 3 I0517 13:49:07.528334 6 log.go:172] (0xc0009f24d0) (0xc000b62000) Stream removed, broadcasting: 5 May 17 13:49:07.528: INFO: Found all expected endpoints: [netserver-0] May 17 13:49:07.531: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.141 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1914 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 13:49:07.531: INFO: >>> kubeConfig: /root/.kube/config I0517 13:49:07.562211 6 log.go:172] (0xc000df3340) (0xc0001177c0) Create stream I0517 13:49:07.562238 6 log.go:172] (0xc000df3340) (0xc0001177c0) Stream added, broadcasting: 1 I0517 13:49:07.563806 6 log.go:172] (0xc000df3340) Reply frame received for 1 I0517 13:49:07.563831 6 log.go:172] (0xc000df3340) (0xc00144c0a0) Create stream I0517 13:49:07.563838 6 log.go:172] (0xc000df3340) (0xc00144c0a0) Stream added, broadcasting: 3 I0517 13:49:07.564629 6 log.go:172] (0xc000df3340) Reply frame received for 3 I0517 13:49:07.564653 6 log.go:172] (0xc000df3340) (0xc00144c280) Create stream I0517 13:49:07.564660 6 log.go:172] (0xc000df3340) (0xc00144c280) Stream added, broadcasting: 5 I0517 13:49:07.565548 6 log.go:172] (0xc000df3340) Reply frame received for 5 I0517 13:49:08.623533 6 log.go:172] (0xc000df3340) Data frame received for 3 I0517 13:49:08.623596 6 log.go:172] (0xc00144c0a0) (3) Data frame handling I0517 13:49:08.623636 6 log.go:172] (0xc00144c0a0) (3) Data frame sent I0517 13:49:08.623659 6 log.go:172] (0xc000df3340) Data frame received for 3 I0517 13:49:08.623678 6 log.go:172] (0xc00144c0a0) (3) Data frame handling I0517 13:49:08.623702 6 log.go:172] (0xc000df3340) Data 
frame received for 5 I0517 13:49:08.623727 6 log.go:172] (0xc00144c280) (5) Data frame handling I0517 13:49:08.626089 6 log.go:172] (0xc000df3340) Data frame received for 1 I0517 13:49:08.626133 6 log.go:172] (0xc0001177c0) (1) Data frame handling I0517 13:49:08.626155 6 log.go:172] (0xc0001177c0) (1) Data frame sent I0517 13:49:08.626177 6 log.go:172] (0xc000df3340) (0xc0001177c0) Stream removed, broadcasting: 1 I0517 13:49:08.626197 6 log.go:172] (0xc000df3340) Go away received I0517 13:49:08.626411 6 log.go:172] (0xc000df3340) (0xc0001177c0) Stream removed, broadcasting: 1 I0517 13:49:08.626451 6 log.go:172] (0xc000df3340) (0xc00144c0a0) Stream removed, broadcasting: 3 I0517 13:49:08.626485 6 log.go:172] (0xc000df3340) (0xc00144c280) Stream removed, broadcasting: 5 May 17 13:49:08.626: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:49:08.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1914" for this suite. 
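The UDP check in this test pipes `hostName` through `nc` from a host-exec helper pod to each netserver pod IP on port 8081; the exact command appears in the ExecWithOptions lines above. A hedged sketch of such a probe pod, with the pod IP and port taken from the log and the image and `hostNetwork` setting assumed:

```yaml
# Hedged sketch of a host-exec probe pod for the node-pod UDP check.
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod-example
spec:
  hostNetwork: true     # probe from the node's own network namespace
  restartPolicy: Never
  containers:
  - name: hostexec
    image: busybox
    # 10.244.2.245:8081/udp is one netserver endpoint from the log;
    # substitute the pod IP under test.
    command: ["sh", "-c", "echo hostName | nc -w 1 -u 10.244.2.245 8081"]
```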
May 17 13:49:32.672: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:49:32.743: INFO: namespace pod-network-test-1914 deletion completed in 24.11287631s • [SLOW TEST:58.783 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:49:32.744: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 17 13:49:32.887: INFO: Creating deployment "test-recreate-deployment" May 17 13:49:32.972: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 17 13:49:33.012: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 17 13:49:35.019: INFO: Waiting deployment "test-recreate-deployment" to complete May 17 13:49:35.022: INFO: 
deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320173, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320173, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320173, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 13:49:37.059: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320173, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320173, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320173, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320172, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 13:49:39.025: INFO: Triggering a new rollout for deployment 
"test-recreate-deployment" May 17 13:49:39.031: INFO: Updating deployment test-recreate-deployment May 17 13:49:39.031: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 17 13:49:39.811: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-9840,SelfLink:/apis/apps/v1/namespaces/deployment-9840/deployments/test-recreate-deployment,UID:657b0914-9f42-4688-9360-f8481e5ad87d,ResourceVersion:11403769,Generation:2,CreationTimestamp:2020-05-17 13:49:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false 
false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-05-17 13:49:39 +0000 UTC 2020-05-17 13:49:39 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-05-17 13:49:39 +0000 UTC 2020-05-17 13:49:32 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},} May 17 13:49:39.815: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-9840,SelfLink:/apis/apps/v1/namespaces/deployment-9840/replicasets/test-recreate-deployment-5c8c9cc69d,UID:f472e20f-8e0c-4954-875a-9378264ba5a6,ResourceVersion:11403768,Generation:1,CreationTimestamp:2020-05-17 13:49:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 
5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 657b0914-9f42-4688-9360-f8481e5ad87d 0xc002b69717 0xc002b69718}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 17 13:49:39.815: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 17 13:49:39.815: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-9840,SelfLink:/apis/apps/v1/namespaces/deployment-9840/replicasets/test-recreate-deployment-6df85df6b9,UID:053e2b5f-50a4-4350-9a08-bd6beaa63513,ResourceVersion:11403759,Generation:2,CreationTimestamp:2020-05-17 13:49:32 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment 657b0914-9f42-4688-9360-f8481e5ad87d 0xc002b697e7 0xc002b697e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: 
sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 17 13:49:39.865: INFO: Pod "test-recreate-deployment-5c8c9cc69d-n26j4" is not available: 
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-n26j4,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-9840,SelfLink:/api/v1/namespaces/deployment-9840/pods/test-recreate-deployment-5c8c9cc69d-n26j4,UID:e0c25326-1d59-460c-a3db-0c99068381e5,ResourceVersion:11403770,Generation:0,CreationTimestamp:2020-05-17 13:49:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d f472e20f-8e0c-4954-875a-9378264ba5a6 0xc00186d8a7 0xc00186d8a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-hgcj9 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-hgcj9,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-hgcj9 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc00186d920} {node.kubernetes.io/unreachable Exists NoExecute 0xc00186d940}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:49:39 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:49:39 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 13:49:39 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:,StartTime:2020-05-17 13:49:39 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:49:39.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9840" for this suite. 
May 17 13:49:46.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:49:46.525: INFO: namespace deployment-9840 deletion completed in 6.617735948s • [SLOW TEST:13.781 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:49:46.525: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-f05ff06c-3d25-4081-9454-b93491dcd347 STEP: Creating a pod to test consume secrets May 17 13:49:46.835: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0" in namespace "projected-1540" to be "success or failure" May 17 13:49:46.850: INFO: Pod "pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.760359ms May 17 13:49:48.853: INFO: Pod "pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018223817s May 17 13:49:50.924: INFO: Pod "pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089392533s May 17 13:49:52.929: INFO: Pod "pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.094048691s STEP: Saw pod success May 17 13:49:52.929: INFO: Pod "pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0" satisfied condition "success or failure" May 17 13:49:52.932: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0 container secret-volume-test: STEP: delete the pod May 17 13:49:52.982: INFO: Waiting for pod pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0 to disappear May 17 13:49:53.061: INFO: Pod pod-projected-secrets-4e2349a3-2cba-4f9f-9e26-1989501e07f0 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:49:53.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1540" for this suite. 
May 17 13:49:59.114: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:49:59.188: INFO: namespace projected-1540 deletion completed in 6.121930367s • [SLOW TEST:12.662 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:49:59.188: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward api env vars May 17 13:49:59.252: INFO: Waiting up to 5m0s for pod "downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458" in namespace "downward-api-4188" to be "success or failure" May 17 13:49:59.326: INFO: Pod "downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458": Phase="Pending", Reason="", readiness=false. Elapsed: 74.259752ms May 17 13:50:01.331: INFO: Pod "downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079063104s May 17 13:50:03.631: INFO: Pod "downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.378948225s May 17 13:50:05.635: INFO: Pod "downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.383245138s STEP: Saw pod success May 17 13:50:05.635: INFO: Pod "downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458" satisfied condition "success or failure" May 17 13:50:05.638: INFO: Trying to get logs from node iruya-worker2 pod downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458 container dapi-container: STEP: delete the pod May 17 13:50:05.686: INFO: Waiting for pod downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458 to disappear May 17 13:50:05.750: INFO: Pod downward-api-2ce78e0c-8e07-4d88-9683-4d724483c458 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:50:05.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4188" for this suite. May 17 13:50:11.805: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:50:11.876: INFO: namespace downward-api-4188 deletion completed in 6.121956957s • [SLOW TEST:12.688 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:50:11.876: INFO: >>> kubeConfig: 
/root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 17 13:50:18.738: INFO: Successfully updated pod "annotationupdate73d51508-f555-4e28-a17a-499274a6d8df" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:50:20.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4518" for this suite. May 17 13:50:44.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:50:44.925: INFO: namespace downward-api-4518 deletion completed in 24.10015334s • [SLOW TEST:33.048 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:50:44.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace 
api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 17 13:50:45.091: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Registering the sample API server. May 17 13:50:45.975: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 17 13:50:50.093: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320246, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320246, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320246, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320245, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 13:50:52.530: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320246, 
loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320246, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320246, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725320245, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 13:50:54.858: INFO: Waited 620.589087ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:50:55.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-3365" for this suite. 
May 17 13:51:01.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:51:02.106: INFO: namespace aggregator-3365 deletion completed in 6.316379591s • [SLOW TEST:17.182 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:51:02.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-ba5c2dbe-4cca-46aa-9cd2-5c3ab5603504 STEP: Creating a pod to test consume configMaps May 17 13:51:02.277: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d" in namespace "projected-2986" to be "success or failure" May 17 13:51:02.312: INFO: Pod "pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d": Phase="Pending", Reason="", readiness=false. 
Elapsed: 34.251483ms May 17 13:51:04.316: INFO: Pod "pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038905742s May 17 13:51:06.321: INFO: Pod "pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043564528s May 17 13:51:08.518: INFO: Pod "pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.240415311s STEP: Saw pod success May 17 13:51:08.518: INFO: Pod "pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d" satisfied condition "success or failure" May 17 13:51:08.521: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d container projected-configmap-volume-test: STEP: delete the pod May 17 13:51:08.730: INFO: Waiting for pod pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d to disappear May 17 13:51:08.806: INFO: Pod pod-projected-configmaps-17a8a695-9b5b-4dfb-98cc-db80a974315d no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:51:08.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2986" for this suite. 
May 17 13:51:14.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:51:14.959: INFO: namespace projected-2986 deletion completed in 6.149577757s • [SLOW TEST:12.853 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:51:14.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test override all May 17 13:51:15.108: INFO: Waiting up to 5m0s for pod "client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16" in namespace "containers-5308" to be "success or failure" May 17 13:51:15.219: INFO: Pod "client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16": Phase="Pending", Reason="", readiness=false. Elapsed: 110.556221ms May 17 13:51:17.223: INFO: Pod "client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.114435607s May 17 13:51:19.226: INFO: Pod "client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118092761s May 17 13:51:21.230: INFO: Pod "client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.12195245s STEP: Saw pod success May 17 13:51:21.230: INFO: Pod "client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16" satisfied condition "success or failure" May 17 13:51:21.232: INFO: Trying to get logs from node iruya-worker pod client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16 container test-container: STEP: delete the pod May 17 13:51:21.784: INFO: Waiting for pod client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16 to disappear May 17 13:51:21.824: INFO: Pod client-containers-0a19421c-d338-44d4-99be-cd1cb4130e16 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:51:21.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5308" for this suite. 
May 17 13:51:27.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:51:28.114: INFO: namespace containers-5308 deletion completed in 6.286498591s • [SLOW TEST:13.154 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:51:28.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-bae18817-e8d1-4e3e-8f49-481d28d6abbf STEP: Creating a pod to test consume configMaps May 17 13:51:28.368: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b" in namespace "projected-3485" to be "success or failure" May 17 13:51:28.428: INFO: Pod "pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 60.61707ms May 17 13:51:30.432: INFO: Pod "pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.064429592s May 17 13:51:32.436: INFO: Pod "pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068255665s May 17 13:51:34.441: INFO: Pod "pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.073199047s STEP: Saw pod success May 17 13:51:34.441: INFO: Pod "pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b" satisfied condition "success or failure" May 17 13:51:34.445: INFO: Trying to get logs from node iruya-worker2 pod pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b container projected-configmap-volume-test: STEP: delete the pod May 17 13:51:34.616: INFO: Waiting for pod pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b to disappear May 17 13:51:34.678: INFO: Pod pod-projected-configmaps-66c7a2ba-82bd-4b97-bf7f-312e4a1c173b no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:51:34.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3485" for this suite. 
May 17 13:51:40.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:51:40.879: INFO: namespace projected-3485 deletion completed in 6.196551722s • [SLOW TEST:12.765 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:51:40.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating api versions May 17 13:51:41.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 17 13:51:41.231: INFO: stderr: "" May 17 13:51:41.231: INFO: stdout: 
"admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:51:41.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4146" for this suite. 
May 17 13:51:47.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:51:47.392: INFO: namespace kubectl-4146 deletion completed in 6.129778638s • [SLOW TEST:6.511 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl api-versions /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:51:47.392: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test substitution in container's command May 17 13:51:47.665: INFO: Waiting up to 5m0s for pod "var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011" in namespace "var-expansion-5821" to be "success or failure" May 17 13:51:47.671: INFO: Pod "var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011": Phase="Pending", Reason="", readiness=false. 
Elapsed: 5.918714ms May 17 13:51:49.676: INFO: Pod "var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010448389s May 17 13:51:51.680: INFO: Pod "var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011": Phase="Pending", Reason="", readiness=false. Elapsed: 4.01479506s May 17 13:51:53.685: INFO: Pod "var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.019911633s STEP: Saw pod success May 17 13:51:53.685: INFO: Pod "var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011" satisfied condition "success or failure" May 17 13:51:53.687: INFO: Trying to get logs from node iruya-worker pod var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011 container dapi-container: STEP: delete the pod May 17 13:51:53.726: INFO: Waiting for pod var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011 to disappear May 17 13:51:53.742: INFO: Pod var-expansion-41aa21a0-9fae-4f6f-bc9a-e8b03acfd011 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:51:53.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5821" for this suite. 
May 17 13:51:59.851: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:51:59.920: INFO: namespace var-expansion-5821 deletion completed in 6.174337088s • [SLOW TEST:12.528 seconds] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:51:59.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210 STEP: creating the pod May 17 13:52:00.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6542' May 17 13:52:03.811: INFO: stderr: "" May 17 13:52:03.811: INFO: stdout: "pod/pause created\n" May 17 13:52:03.811: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 17 13:52:03.811: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6542" to be "running and ready" May 17 13:52:03.915: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 103.710493ms
May 17 13:52:05.918: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10684768s
May 17 13:52:07.922: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110990183s
May 17 13:52:09.926: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 6.114714399s
May 17 13:52:09.926: INFO: Pod "pause" satisfied condition "running and ready"
May 17 13:52:09.926: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
May 17 13:52:09.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6542'
May 17 13:52:10.027: INFO: stderr: ""
May 17 13:52:10.027: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
May 17 13:52:10.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6542'
May 17 13:52:10.116: INFO: stderr: ""
May 17 13:52:10.116: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s testing-label-value\n"
STEP: removing the label testing-label of a pod
May 17 13:52:10.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6542'
May 17 13:52:10.225: INFO: stderr: ""
May 17 13:52:10.225: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
May 17 13:52:10.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6542'
May 17 13:52:10.359: INFO: stderr: ""
May 17 13:52:10.359: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 7s \n"
[AfterEach] [k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
May 17 13:52:10.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6542'
May 17 13:52:10.623: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 17 13:52:10.623: INFO: stdout: "pod \"pause\" force deleted\n"
May 17 13:52:10.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6542'
May 17 13:52:10.732: INFO: stderr: "No resources found.\n"
May 17 13:52:10.732: INFO: stdout: ""
May 17 13:52:10.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6542 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 17 13:52:10.823: INFO: stderr: ""
May 17 13:52:10.823: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:52:10.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6542" for this suite.
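As an aside, every command the framework shells out to appears in the log as a `Running '...'` entry, which makes these invocations easy to recover mechanically. A minimal Python sketch (a hypothetical helper, not part of the e2e framework) that turns such a line back into an argv list:

```python
import re
import shlex

# Matches command-invocation lines such as:
#   May 17 13:52:09.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=... label pods pause ...'
RUN_RE = re.compile(r"INFO: Running '(?P<cmd>[^']+)'")

def parsed_command(line):
    """Return the invoked command as an argv list, or None if the line is not a Running entry."""
    m = RUN_RE.search(line)
    if m:
        return shlex.split(m.group("cmd"))
    return None
```

For example, feeding it the `kubectl label pods pause testing-label=testing-label-value` line above yields an argv list whose first element is `/usr/local/bin/kubectl`, which is convenient for counting or replaying the kubectl calls a run made.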
May 17 13:52:16.895: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:52:17.005: INFO: namespace kubectl-6542 deletion completed in 6.178684636s

• [SLOW TEST:17.085 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl label
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should update the label on a resource [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:52:17.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 pods, got 2 pods
STEP: expected 0 rs, got 1 rs
STEP: Gathering metrics
W0517 13:52:18.296535 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
May 17 13:52:18.296: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:52:18.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4451" for this suite.
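The pod-status polling entries throughout this run all share one shape (`Pod "<name>": Phase="<phase>" ... Elapsed: <duration>`), so time-to-phase can be recovered from the raw log. A minimal Python sketch (a hypothetical post-processing helper, not part of the e2e framework) that extracts pod, phase, and elapsed seconds from such a line:

```python
import re

# Matches e2e poll lines such as:
#   May 17 13:52:05.918: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10684768s
# Durations in these logs appear with either an "ms" or "s" suffix.
POLL_RE = re.compile(
    r'Pod "(?P<pod>[^"]+)": Phase="(?P<phase>\w+)".*?Elapsed: (?P<elapsed>[\d.]+)(?P<unit>ms|s)'
)

def parse_poll(line):
    """Return (pod, phase, elapsed_seconds), or None if the line is not a poll entry."""
    m = POLL_RE.search(line)
    if not m:
        return None
    value = float(m.group("elapsed"))
    if m.group("unit") == "ms":
        value /= 1000.0
    return m.group("pod"), m.group("phase"), value
```

Running this over a test's poll lines and keeping the first `Succeeded` entry gives the per-pod time-to-completion figures that the `SLOW TEST` summaries only report in aggregate.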
May 17 13:52:24.427: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:52:24.495: INFO: namespace gc-4451 deletion completed in 6.195391477s

• [SLOW TEST:7.489 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should delete RS created by deployment when not orphaning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] EmptyDir volumes
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:52:24.495: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
May 17 13:52:24.738: INFO: Waiting up to 5m0s for pod "pod-21318901-3ac6-4d8f-a74f-5b934625f5eb" in namespace "emptydir-1288" to be "success or failure"
May 17 13:52:24.795: INFO: Pod "pod-21318901-3ac6-4d8f-a74f-5b934625f5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 56.41375ms
May 17 13:52:26.864: INFO: Pod "pod-21318901-3ac6-4d8f-a74f-5b934625f5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.12614884s
May 17 13:52:29.064: INFO: Pod "pod-21318901-3ac6-4d8f-a74f-5b934625f5eb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.325835641s
May 17 13:52:31.068: INFO: Pod "pod-21318901-3ac6-4d8f-a74f-5b934625f5eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.329710032s
STEP: Saw pod success
May 17 13:52:31.068: INFO: Pod "pod-21318901-3ac6-4d8f-a74f-5b934625f5eb" satisfied condition "success or failure"
May 17 13:52:31.070: INFO: Trying to get logs from node iruya-worker pod pod-21318901-3ac6-4d8f-a74f-5b934625f5eb container test-container:
STEP: delete the pod
May 17 13:52:31.253: INFO: Waiting for pod pod-21318901-3ac6-4d8f-a74f-5b934625f5eb to disappear
May 17 13:52:31.269: INFO: Pod pod-21318901-3ac6-4d8f-a74f-5b934625f5eb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:52:31.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1288" for this suite.
May 17 13:52:37.308: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:52:37.370: INFO: namespace emptydir-1288 deletion completed in 6.098578073s

• [SLOW TEST:12.875 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:52:37.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 17 13:52:37.476: INFO: Waiting up to 5m0s for pod "downward-api-c72c5811-9380-47ca-82ff-b569b5e64903" in namespace "downward-api-4841" to be "success or failure"
May 17 13:52:37.503: INFO: Pod "downward-api-c72c5811-9380-47ca-82ff-b569b5e64903": Phase="Pending", Reason="", readiness=false. Elapsed: 27.399446ms
May 17 13:52:39.508: INFO: Pod "downward-api-c72c5811-9380-47ca-82ff-b569b5e64903": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032043281s
May 17 13:52:41.512: INFO: Pod "downward-api-c72c5811-9380-47ca-82ff-b569b5e64903": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036475273s
May 17 13:52:43.687: INFO: Pod "downward-api-c72c5811-9380-47ca-82ff-b569b5e64903": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.211494751s
STEP: Saw pod success
May 17 13:52:43.687: INFO: Pod "downward-api-c72c5811-9380-47ca-82ff-b569b5e64903" satisfied condition "success or failure"
May 17 13:52:43.690: INFO: Trying to get logs from node iruya-worker2 pod downward-api-c72c5811-9380-47ca-82ff-b569b5e64903 container dapi-container:
STEP: delete the pod
May 17 13:52:43.752: INFO: Waiting for pod downward-api-c72c5811-9380-47ca-82ff-b569b5e64903 to disappear
May 17 13:52:43.872: INFO: Pod downward-api-c72c5811-9380-47ca-82ff-b569b5e64903 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:52:43.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4841" for this suite.
May 17 13:52:49.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:52:49.979: INFO: namespace downward-api-4841 deletion completed in 6.103160541s

• [SLOW TEST:12.608 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
[sig-storage] Projected combined
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:52:49.980: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-9c0c6ff6-f9b1-4e53-8494-7d474e6ef1db
STEP: Creating secret with name secret-projected-all-test-volume-fe9c6995-d3d7-48c7-ac9d-a4a2fcf65583
STEP: Creating a pod to test Check all projections for projected volume plugin
May 17 13:52:50.278: INFO: Waiting up to 5m0s for pod "projected-volume-8510343c-8997-47b8-8fc4-95e523601db6" in namespace "projected-7336" to be "success or failure"
May 17 13:52:50.358: INFO: Pod "projected-volume-8510343c-8997-47b8-8fc4-95e523601db6": Phase="Pending", Reason="", readiness=false. Elapsed: 79.281837ms
May 17 13:52:52.363: INFO: Pod "projected-volume-8510343c-8997-47b8-8fc4-95e523601db6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084477501s
May 17 13:52:54.367: INFO: Pod "projected-volume-8510343c-8997-47b8-8fc4-95e523601db6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088844126s
May 17 13:52:56.424: INFO: Pod "projected-volume-8510343c-8997-47b8-8fc4-95e523601db6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.145981688s
STEP: Saw pod success
May 17 13:52:56.424: INFO: Pod "projected-volume-8510343c-8997-47b8-8fc4-95e523601db6" satisfied condition "success or failure"
May 17 13:52:56.427: INFO: Trying to get logs from node iruya-worker pod projected-volume-8510343c-8997-47b8-8fc4-95e523601db6 container projected-all-volume-test:
STEP: delete the pod
May 17 13:52:56.513: INFO: Waiting for pod projected-volume-8510343c-8997-47b8-8fc4-95e523601db6 to disappear
May 17 13:52:56.639: INFO: Pod projected-volume-8510343c-8997-47b8-8fc4-95e523601db6 no longer exists
[AfterEach] [sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:52:56.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7336" for this suite.
May 17 13:53:02.888: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:53:02.957: INFO: namespace projected-7336 deletion completed in 6.313897248s

• [SLOW TEST:12.978 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:53:02.958: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-89ef9ec3-fffa-4cb5-9c7c-aed7dc40833f
STEP: Creating a pod to test consume configMaps
May 17 13:53:03.135: INFO: Waiting up to 5m0s for pod "pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56" in namespace "configmap-4767" to be "success or failure"
May 17 13:53:03.188: INFO: Pod "pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56": Phase="Pending", Reason="", readiness=false. Elapsed: 53.172839ms
May 17 13:53:05.193: INFO: Pod "pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05767259s
May 17 13:53:07.197: INFO: Pod "pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062624239s
May 17 13:53:09.201: INFO: Pod "pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56": Phase="Running", Reason="", readiness=true. Elapsed: 6.066131816s
May 17 13:53:11.205: INFO: Pod "pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0696993s
STEP: Saw pod success
May 17 13:53:11.205: INFO: Pod "pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56" satisfied condition "success or failure"
May 17 13:53:11.207: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56 container configmap-volume-test:
STEP: delete the pod
May 17 13:53:11.259: INFO: Waiting for pod pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56 to disappear
May 17 13:53:11.322: INFO: Pod pod-configmaps-6def3e7f-927b-405a-b8df-0c781f4e1f56 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:53:11.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4767" for this suite.
May 17 13:53:17.370: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:53:17.425: INFO: namespace configmap-4767 deletion completed in 6.100674222s

• [SLOW TEST:14.468 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable from pods in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected downwardAPI
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:53:17.426: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
May 17 13:53:17.599: INFO: Waiting up to 5m0s for pod "downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e" in namespace "projected-6505" to be "success or failure"
May 17 13:53:17.605: INFO: Pod "downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.77031ms
May 17 13:53:19.610: INFO: Pod "downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010531127s
May 17 13:53:21.652: INFO: Pod "downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05211451s
May 17 13:53:23.786: INFO: Pod "downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.186174015s
STEP: Saw pod success
May 17 13:53:23.786: INFO: Pod "downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e" satisfied condition "success or failure"
May 17 13:53:23.788: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e container client-container:
STEP: delete the pod
May 17 13:53:23.843: INFO: Waiting for pod downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e to disappear
May 17 13:53:23.894: INFO: Pod downwardapi-volume-59101633-091d-4112-a4f6-b6ed17f62a9e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:53:23.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6505" for this suite.
May 17 13:53:30.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:53:30.415: INFO: namespace projected-6505 deletion completed in 6.516713122s

• [SLOW TEST:12.990 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:53:30.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8359.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8359.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local; sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 17 13:53:40.631: INFO: DNS probes using dns-test-bfe735fe-507e-4bef-8dd8-3fa4a56077be succeeded
STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8359.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8359.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local; sleep 1; done
STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 17 13:53:50.883: INFO: File wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local from pod dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 17 13:53:50.886: INFO: File jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local from pod dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 17 13:53:50.886: INFO: Lookups using dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 failed for: [wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local]
May 17 13:53:55.910: INFO: File wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local from pod dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 17 13:53:55.914: INFO: File jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local from pod dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 17 13:53:55.914: INFO: Lookups using dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 failed for: [wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local]
May 17 13:54:00.890: INFO: File wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local from pod dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 17 13:54:00.893: INFO: File jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local from pod dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 17 13:54:00.893: INFO: Lookups using dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 failed for: [wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local]
May 17 13:54:05.928: INFO: File wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local from pod dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 17 13:54:05.940: INFO: File jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local from pod dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 contains 'foo.example.com. ' instead of 'bar.example.com.'
May 17 13:54:05.940: INFO: Lookups using dns-8359/dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 failed for: [wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local]
May 17 13:54:10.893: INFO: DNS probes using dns-test-4939c0c9-352a-408b-93e6-2ae405669c52 succeeded
STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8359.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8359.svc.cluster.local; sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8359.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8359.svc.cluster.local; sleep 1; done
STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
May 17 13:54:20.260: INFO: DNS probes using dns-test-2a37877d-c509-4522-981c-57d9cb4ff6a2 succeeded
STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:54:20.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8359" for this suite.
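The DNS test above retries until the updated ExternalName record propagates, emitting a `Lookups using ... failed for: [...]` line on each failed pass. The failing record names can be pulled out of those lines mechanically; a minimal Python sketch (a hypothetical log-analysis helper, not part of the e2e framework):

```python
import re

# Matches DNS-probe retry lines such as:
#   May 17 13:53:50.886: INFO: Lookups using dns-8359/dns-test-... failed for: [wheezy_udp@... jessie_udp@...]
FAIL_RE = re.compile(r'Lookups using (?P<probe>\S+) failed for: \[(?P<names>[^\]]*)\]')

def failed_lookups(line):
    """Return (probe_pod, list_of_failed_record_names), or None for non-matching lines."""
    m = FAIL_RE.search(line)
    if not m:
        return None
    return m.group("probe"), m.group("names").split()
```

Counting how many such lines each probe pod emits gives a rough measure of how long a record change took to propagate (here, roughly four 5-second retry rounds before the probes succeeded).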
May 17 13:54:28.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:54:28.598: INFO: namespace dns-8359 deletion completed in 8.217467688s

• [SLOW TEST:58.182 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide DNS for ExternalName services [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
S
------------------------------
[sig-storage] ConfigMap
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:54:28.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-533dc0ed-5949-4e09-afb7-caf33622605a
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 13:54:36.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3304" for this suite.
May 17 13:55:06.917: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 13:55:06.994: INFO: namespace configmap-3304 deletion completed in 30.124187493s

• [SLOW TEST:38.396 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
binary data should be reflected in volume [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 13:55:06.994: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod test-webserver-b3be73a1-7128-4892-9aa1-675caff74fda in namespace container-probe-8255
May 17 13:55:13.278: INFO: Started pod test-webserver-b3be73a1-7128-4892-9aa1-675caff74fda in namespace container-probe-8255
STEP: checking the pod's current state and verifying that restartCount is present
May 17 13:55:13.281: INFO: Initial restart count of pod test-webserver-b3be73a1-7128-4892-9aa1-675caff74fda is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 13:59:14.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-8255" for this suite. May 17 13:59:20.076: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 13:59:20.145: INFO: namespace container-probe-8255 deletion completed in 6.107075661s • [SLOW TEST:253.151 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 13:59:20.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-6157187d-00a7-40d8-b93a-ed4b2240bdfc in namespace container-probe-2013 May 17 13:59:24.257: INFO: Started pod 
busybox-6157187d-00a7-40d8-b93a-ed4b2240bdfc in namespace container-probe-2013 STEP: checking the pod's current state and verifying that restartCount is present May 17 13:59:24.259: INFO: Initial restart count of pod busybox-6157187d-00a7-40d8-b93a-ed4b2240bdfc is 0 May 17 14:00:14.375: INFO: Restart count of pod container-probe-2013/busybox-6157187d-00a7-40d8-b93a-ed4b2240bdfc is now 1 (50.11580534s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:00:14.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2013" for this suite. May 17 14:00:20.438: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:00:20.506: INFO: namespace container-probe-2013 deletion completed in 6.088293732s • [SLOW TEST:60.361 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:00:20.507: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] 
SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 17 14:00:20.558: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 17 14:00:20.575: INFO: Waiting for terminating namespaces to be deleted... May 17 14:00:20.577: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 17 14:00:20.583: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 17 14:00:20.583: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:00:20.583: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 17 14:00:20.583: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:00:20.583: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 17 14:00:20.590: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 17 14:00:20.590: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:00:20.590: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 17 14:00:20.590: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:00:20.590: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 17 14:00:20.590: INFO: Container coredns ready: true, restart count 0 May 17 14:00:20.590: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 17 14:00:20.590: INFO: Container coredns ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. 
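The NodeSelector test above launches an unlabeled placeholder pod to pick a node, applies a random label to that node, then relaunches the pod with a matching nodeSelector. A minimal manifest of the kind relaunched here might look like this (the label key/value match the ones logged later in this test; the pod name and image are illustrative assumptions, not taken from the suite):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: with-labels            # illustrative; the test generates its own name
spec:
  # only nodes carrying this exact label can schedule the pod
  nodeSelector:
    kubernetes.io/e2e-2ea40b8e-53e0-4157-9ddf-699f0b8bb69d: "42"
  containers:
  - name: with-labels
    image: k8s.gcr.io/pause:3.1   # assumption: a small pause-style image
```

Since exactly one node (iruya-worker2) receives the label, the scheduler is forced to place the relaunched pod there, which is what the test then verifies.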
STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-2ea40b8e-53e0-4157-9ddf-699f0b8bb69d 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-2ea40b8e-53e0-4157-9ddf-699f0b8bb69d off the node iruya-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-2ea40b8e-53e0-4157-9ddf-699f0b8bb69d [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:00:28.748: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-8777" for this suite. May 17 14:00:36.766: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:00:36.860: INFO: namespace sched-pred-8777 deletion completed in 8.108273192s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:16.353 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:00:36.860: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 17 14:00:41.448: INFO: Successfully updated pod "labelsupdateab5d91f1-a0bd-4567-9893-5f7aedc875f9" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:00:43.471: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6546" for this suite. May 17 14:01:05.523: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:01:05.599: INFO: namespace downward-api-6546 deletion completed in 22.124804406s • [SLOW TEST:28.739 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:01:05.601: INFO: >>> kubeConfig: /root/.kube/config STEP: 
Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 17 14:01:09.706: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-4f47c7fe-18f1-4312-9c36-c29f7433aac0,GenerateName:,Namespace:events-7584,SelfLink:/api/v1/namespaces/events-7584/pods/send-events-4f47c7fe-18f1-4312-9c36-c29f7433aac0,UID:272710c6-1f17-4b23-b15f-e232d8387171,ResourceVersion:11405854,Generation:0,CreationTimestamp:2020-05-17 14:01:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 651426140,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-nf6fk {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-nf6fk,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] [] [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-nf6fk true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002a66320} {node.kubernetes.io/unreachable Exists NoExecute 0xc002a66340}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:01:05 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:01:09 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:01:09 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:01:05 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.6,StartTime:2020-05-17 14:01:05 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-05-17 14:01:08 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 containerd://9c61188dd0b6ce034df52d5ee09d43a40c769c69cb54bb693124080e5c864079}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} STEP: checking for scheduler event about the pod May 17 14:01:11.711: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 17 14:01:13.715: INFO: Saw kubelet event for our pod. 
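The one-line Go struct dump above is hard to scan. Re-expressed as a YAML manifest, with fields taken from the dump and server-populated defaults omitted, the retrieved pod is roughly:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: send-events-4f47c7fe-18f1-4312-9c36-c29f7433aac0
  namespace: events-7584
  labels:
    name: foo
    time: "651426140"
spec:
  containers:
  - name: p
    image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1
    ports:
    - containerPort: 80
      protocol: TCP
  restartPolicy: Always
  nodeName: iruya-worker      # filled in by the scheduler, per the dump
```

Scheduling this pod produces the scheduler event ("Scheduled") and kubelet events ("Pulled"/"Created"/"Started") that the two subsequent checks look for.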
STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:01:13.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-7584" for this suite. May 17 14:01:51.804: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:01:51.879: INFO: namespace events-7584 deletion completed in 38.151404712s • [SLOW TEST:46.279 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:01:51.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 17 14:01:51.948: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer 
[NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:01:59.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9673" for this suite. May 17 14:02:05.366: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:02:05.439: INFO: namespace init-container-9673 deletion completed in 6.102798777s • [SLOW TEST:13.559 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:02:05.439: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-secret-7ndv STEP: Creating a pod to test atomic-volume-subpath May 17 14:02:05.585: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-7ndv" in namespace "subpath-2410" to be 
"success or failure" May 17 14:02:05.597: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Pending", Reason="", readiness=false. Elapsed: 12.107747ms May 17 14:02:07.771: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Pending", Reason="", readiness=false. Elapsed: 2.186188879s May 17 14:02:09.775: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 4.190272068s May 17 14:02:11.779: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 6.194383438s May 17 14:02:13.784: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 8.198675369s May 17 14:02:15.787: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 10.202440834s May 17 14:02:17.791: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 12.206582288s May 17 14:02:19.796: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 14.210887883s May 17 14:02:21.801: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 16.215686408s May 17 14:02:23.805: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 18.219965231s May 17 14:02:25.809: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 20.224298217s May 17 14:02:27.813: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Running", Reason="", readiness=true. Elapsed: 22.227972675s May 17 14:02:29.816: INFO: Pod "pod-subpath-test-secret-7ndv": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.231231204s STEP: Saw pod success May 17 14:02:29.816: INFO: Pod "pod-subpath-test-secret-7ndv" satisfied condition "success or failure" May 17 14:02:29.818: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-secret-7ndv container test-container-subpath-secret-7ndv: STEP: delete the pod May 17 14:02:29.877: INFO: Waiting for pod pod-subpath-test-secret-7ndv to disappear May 17 14:02:29.880: INFO: Pod pod-subpath-test-secret-7ndv no longer exists STEP: Deleting pod pod-subpath-test-secret-7ndv May 17 14:02:29.880: INFO: Deleting pod "pod-subpath-test-secret-7ndv" in namespace "subpath-2410" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:02:29.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2410" for this suite. May 17 14:02:35.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:02:35.978: INFO: namespace subpath-2410 deletion completed in 6.093835622s • [SLOW TEST:30.539 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 
17 14:02:35.978: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 14:02:36.015: INFO: Waiting up to 5m0s for pod "downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c" in namespace "projected-5021" to be "success or failure" May 17 14:02:36.061: INFO: Pod "downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c": Phase="Pending", Reason="", readiness=false. Elapsed: 45.647702ms May 17 14:02:38.065: INFO: Pod "downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050149591s May 17 14:02:40.069: INFO: Pod "downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c": Phase="Running", Reason="", readiness=true. Elapsed: 4.054470321s May 17 14:02:42.074: INFO: Pod "downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.058581791s STEP: Saw pod success May 17 14:02:42.074: INFO: Pod "downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c" satisfied condition "success or failure" May 17 14:02:42.076: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c container client-container: STEP: delete the pod May 17 14:02:42.107: INFO: Waiting for pod downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c to disappear May 17 14:02:42.114: INFO: Pod downwardapi-volume-778fcc41-5bf0-47da-a055-43e7d819049c no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:02:42.114: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5021" for this suite. May 17 14:02:48.130: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:02:48.208: INFO: namespace projected-5021 deletion completed in 6.089627218s • [SLOW TEST:12.230 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:02:48.208: INFO: >>> kubeConfig: /root/.kube/config 
STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0517 14:03:00.645106 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 17 14:03:00.645: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:03:00.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-8777" for this suite. 
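The garbage-collector test above gives half of the pods created by simpletest-rc-to-be-deleted a second owner. In manifest terms, such a pod's metadata carries two ownerReferences (controller names are from the log; the UID values here are placeholders, since the test copies them from the live objects):

```yaml
metadata:
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: <uid-of-rc-to-be-deleted>   # placeholder
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay
    uid: <uid-of-rc-to-stay>         # placeholder
```

Because the second owner remains valid after simpletest-rc-to-be-deleted is removed, the garbage collector must leave these pods alone — which is the behavior the test name asserts.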
May 17 14:03:07.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:03:07.123: INFO: namespace gc-8777 deletion completed in 6.475290806s • [SLOW TEST:18.915 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:03:07.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-5b07ce86-4e73-4702-81e3-d75a0b420b8b in namespace container-probe-5650 May 17 14:03:11.447: INFO: Started pod busybox-5b07ce86-4e73-4702-81e3-d75a0b420b8b in namespace container-probe-5650 STEP: checking the pod's current state and verifying that restartCount is present May 17 14:03:11.449: INFO: Initial restart count of pod 
busybox-5b07ce86-4e73-4702-81e3-d75a0b420b8b is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:07:11.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5650" for this suite. May 17 14:07:18.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:07:18.121: INFO: namespace container-probe-5650 deletion completed in 6.124848841s • [SLOW TEST:250.998 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:07:18.122: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 
14:07:18.214: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b5511c1-f723-444d-b49e-cff4b139a38f" in namespace "projected-3191" to be "success or failure" May 17 14:07:18.229: INFO: Pod "downwardapi-volume-3b5511c1-f723-444d-b49e-cff4b139a38f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.985821ms May 17 14:07:20.234: INFO: Pod "downwardapi-volume-3b5511c1-f723-444d-b49e-cff4b139a38f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019574199s May 17 14:07:22.238: INFO: Pod "downwardapi-volume-3b5511c1-f723-444d-b49e-cff4b139a38f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024266812s STEP: Saw pod success May 17 14:07:22.238: INFO: Pod "downwardapi-volume-3b5511c1-f723-444d-b49e-cff4b139a38f" satisfied condition "success or failure" May 17 14:07:22.243: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-3b5511c1-f723-444d-b49e-cff4b139a38f container client-container: STEP: delete the pod May 17 14:07:22.312: INFO: Waiting for pod downwardapi-volume-3b5511c1-f723-444d-b49e-cff4b139a38f to disappear May 17 14:07:22.326: INFO: Pod downwardapi-volume-3b5511c1-f723-444d-b49e-cff4b139a38f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:07:22.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3191" for this suite. 
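The projected downward API test above mounts a container's CPU limit as a file and has the client container print it. A sketch of that kind of spec (pod name, image, command, and the 500m limit are illustrative assumptions; only the projected/resourceFieldRef shape is the point):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example   # illustrative; the test uses a generated UUID name
spec:
  containers:
  - name: client-container
    image: busybox                   # assumption: a small utility image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"                  # illustrative limit for resourceFieldRef to expose
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
```

The test then reads the container's log and checks the printed value against the declared limit; the Succeeded phase above indicates that comparison passed.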
May 17 14:07:28.342: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:07:28.420: INFO: namespace projected-3191 deletion completed in 6.090628733s • [SLOW TEST:10.298 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:07:28.420: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 14:07:28.512: INFO: Waiting up to 5m0s for pod "downwardapi-volume-cdfb4912-292b-443e-8a2a-ce805dfb3cb1" in namespace "downward-api-6697" to be "success or failure" May 17 14:07:28.516: INFO: Pod "downwardapi-volume-cdfb4912-292b-443e-8a2a-ce805dfb3cb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.120376ms May 17 14:07:30.565: INFO: Pod "downwardapi-volume-cdfb4912-292b-443e-8a2a-ce805dfb3cb1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052991096s May 17 14:07:32.570: INFO: Pod "downwardapi-volume-cdfb4912-292b-443e-8a2a-ce805dfb3cb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057315597s STEP: Saw pod success May 17 14:07:32.570: INFO: Pod "downwardapi-volume-cdfb4912-292b-443e-8a2a-ce805dfb3cb1" satisfied condition "success or failure" May 17 14:07:32.572: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-cdfb4912-292b-443e-8a2a-ce805dfb3cb1 container client-container: STEP: delete the pod May 17 14:07:32.620: INFO: Waiting for pod downwardapi-volume-cdfb4912-292b-443e-8a2a-ce805dfb3cb1 to disappear May 17 14:07:32.629: INFO: Pod downwardapi-volume-cdfb4912-292b-443e-8a2a-ce805dfb3cb1 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:07:32.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6697" for this suite. 
May 17 14:07:38.819: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:07:38.891: INFO: namespace downward-api-6697 deletion completed in 6.258888761s • [SLOW TEST:10.470 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:07:38.892: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 14:07:38.937: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df045b7c-a096-4f23-a756-4c949fec5fc6" in namespace "downward-api-9862" to be "success or failure" May 17 14:07:38.954: INFO: Pod "downwardapi-volume-df045b7c-a096-4f23-a756-4c949fec5fc6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 17.118509ms May 17 14:07:40.958: INFO: Pod "downwardapi-volume-df045b7c-a096-4f23-a756-4c949fec5fc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020767411s May 17 14:07:42.962: INFO: Pod "downwardapi-volume-df045b7c-a096-4f23-a756-4c949fec5fc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024803635s STEP: Saw pod success May 17 14:07:42.962: INFO: Pod "downwardapi-volume-df045b7c-a096-4f23-a756-4c949fec5fc6" satisfied condition "success or failure" May 17 14:07:42.965: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-df045b7c-a096-4f23-a756-4c949fec5fc6 container client-container: STEP: delete the pod May 17 14:07:43.005: INFO: Waiting for pod downwardapi-volume-df045b7c-a096-4f23-a756-4c949fec5fc6 to disappear May 17 14:07:43.027: INFO: Pod downwardapi-volume-df045b7c-a096-4f23-a756-4c949fec5fc6 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:07:43.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9862" for this suite. 
May 17 14:07:49.065: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:07:49.144: INFO: namespace downward-api-9862 deletion completed in 6.114311686s • [SLOW TEST:10.253 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:07:49.145: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 14:07:49.221: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a145d13e-db46-4705-b2cf-9412387bd916" in namespace "projected-4189" to be "success or failure" May 17 14:07:49.224: INFO: Pod "downwardapi-volume-a145d13e-db46-4705-b2cf-9412387bd916": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.428941ms May 17 14:07:51.228: INFO: Pod "downwardapi-volume-a145d13e-db46-4705-b2cf-9412387bd916": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006789985s May 17 14:07:53.232: INFO: Pod "downwardapi-volume-a145d13e-db46-4705-b2cf-9412387bd916": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010920146s STEP: Saw pod success May 17 14:07:53.232: INFO: Pod "downwardapi-volume-a145d13e-db46-4705-b2cf-9412387bd916" satisfied condition "success or failure" May 17 14:07:53.235: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-a145d13e-db46-4705-b2cf-9412387bd916 container client-container: STEP: delete the pod May 17 14:07:53.292: INFO: Waiting for pod downwardapi-volume-a145d13e-db46-4705-b2cf-9412387bd916 to disappear May 17 14:07:53.296: INFO: Pod downwardapi-volume-a145d13e-db46-4705-b2cf-9412387bd916 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:07:53.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4189" for this suite. 
May 17 14:07:59.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:07:59.406: INFO: namespace projected-4189 deletion completed in 6.10294702s • [SLOW TEST:10.261 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:07:59.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 17 14:07:59.494: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-a,UID:c89ce150-11e7-45a3-8d3d-4312e7fde6ca,ResourceVersion:11407061,Generation:0,CreationTimestamp:2020-05-17 
14:07:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 17 14:07:59.495: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-a,UID:c89ce150-11e7-45a3-8d3d-4312e7fde6ca,ResourceVersion:11407061,Generation:0,CreationTimestamp:2020-05-17 14:07:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 17 14:08:09.501: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-a,UID:c89ce150-11e7-45a3-8d3d-4312e7fde6ca,ResourceVersion:11407081,Generation:0,CreationTimestamp:2020-05-17 14:07:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 17 14:08:09.501: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-a,UID:c89ce150-11e7-45a3-8d3d-4312e7fde6ca,ResourceVersion:11407081,Generation:0,CreationTimestamp:2020-05-17 14:07:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 17 14:08:19.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-a,UID:c89ce150-11e7-45a3-8d3d-4312e7fde6ca,ResourceVersion:11407100,Generation:0,CreationTimestamp:2020-05-17 14:07:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 17 14:08:19.510: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-a,UID:c89ce150-11e7-45a3-8d3d-4312e7fde6ca,ResourceVersion:11407100,Generation:0,CreationTimestamp:2020-05-17 14:07:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 17 14:08:29.517: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-a,UID:c89ce150-11e7-45a3-8d3d-4312e7fde6ca,ResourceVersion:11407122,Generation:0,CreationTimestamp:2020-05-17 14:07:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 17 14:08:29.517: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-a,UID:c89ce150-11e7-45a3-8d3d-4312e7fde6ca,ResourceVersion:11407122,Generation:0,CreationTimestamp:2020-05-17 14:07:59 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 17 14:08:39.525: INFO: Got : ADDED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-b,UID:b6448234-3a0d-47c0-bf80-b16947832517,ResourceVersion:11407142,Generation:0,CreationTimestamp:2020-05-17 14:08:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 17 14:08:39.525: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-b,UID:b6448234-3a0d-47c0-bf80-b16947832517,ResourceVersion:11407142,Generation:0,CreationTimestamp:2020-05-17 14:08:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 17 14:08:49.533: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-b,UID:b6448234-3a0d-47c0-bf80-b16947832517,ResourceVersion:11407163,Generation:0,CreationTimestamp:2020-05-17 14:08:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} May 17 14:08:49.533: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-8415,SelfLink:/api/v1/namespaces/watch-8415/configmaps/e2e-watch-test-configmap-b,UID:b6448234-3a0d-47c0-bf80-b16947832517,ResourceVersion:11407163,Generation:0,CreationTimestamp:2020-05-17 14:08:39 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:08:59.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-8415" for this suite. 
May 17 14:09:05.572: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:09:05.650: INFO: namespace watch-8415 deletion completed in 6.110909106s • [SLOW TEST:66.244 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:09:05.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-2ed18d60-5966-44f2-bb8c-9c3617e296bf STEP: Creating a pod to test consume secrets May 17 14:09:05.724: INFO: Waiting up to 5m0s for pod "pod-secrets-00301c64-61dd-4857-80fc-5d4e41dead83" in namespace "secrets-1644" to be "success or failure" May 17 14:09:05.728: INFO: Pod "pod-secrets-00301c64-61dd-4857-80fc-5d4e41dead83": Phase="Pending", Reason="", readiness=false. Elapsed: 3.529651ms May 17 14:09:07.732: INFO: Pod "pod-secrets-00301c64-61dd-4857-80fc-5d4e41dead83": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00751766s May 17 14:09:09.737: INFO: Pod "pod-secrets-00301c64-61dd-4857-80fc-5d4e41dead83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012543858s STEP: Saw pod success May 17 14:09:09.737: INFO: Pod "pod-secrets-00301c64-61dd-4857-80fc-5d4e41dead83" satisfied condition "success or failure" May 17 14:09:09.740: INFO: Trying to get logs from node iruya-worker pod pod-secrets-00301c64-61dd-4857-80fc-5d4e41dead83 container secret-volume-test: STEP: delete the pod May 17 14:09:09.777: INFO: Waiting for pod pod-secrets-00301c64-61dd-4857-80fc-5d4e41dead83 to disappear May 17 14:09:09.794: INFO: Pod pod-secrets-00301c64-61dd-4857-80fc-5d4e41dead83 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:09:09.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-1644" for this suite. May 17 14:09:15.849: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:09:15.919: INFO: namespace secrets-1644 deletion completed in 6.102345081s • [SLOW TEST:10.269 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client 
May 17 14:09:15.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 14:09:16.006: INFO: Waiting up to 5m0s for pod "downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655" in namespace "downward-api-1391" to be "success or failure" May 17 14:09:16.011: INFO: Pod "downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655": Phase="Pending", Reason="", readiness=false. Elapsed: 4.876685ms May 17 14:09:18.014: INFO: Pod "downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008301936s May 17 14:09:20.018: INFO: Pod "downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655": Phase="Running", Reason="", readiness=true. Elapsed: 4.012073941s May 17 14:09:22.022: INFO: Pod "downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.015955952s STEP: Saw pod success May 17 14:09:22.022: INFO: Pod "downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655" satisfied condition "success or failure" May 17 14:09:22.024: INFO: Trying to get logs from node iruya-worker2 pod downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655 container client-container: STEP: delete the pod May 17 14:09:22.060: INFO: Waiting for pod downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655 to disappear May 17 14:09:22.090: INFO: Pod downwardapi-volume-af383d79-3856-46bd-8eb9-64fad6838655 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:09:22.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1391" for this suite. May 17 14:09:28.109: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:09:28.186: INFO: namespace downward-api-1391 deletion completed in 6.092455288s • [SLOW TEST:12.267 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:09:28.187: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default 
service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 17 14:09:28.261: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 17 14:09:33.266: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 17 14:09:33.266: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 17 14:09:33.292: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-5021,SelfLink:/apis/apps/v1/namespaces/deployment-5021/deployments/test-cleanup-deployment,UID:b07cae5a-d8e9-4843-b8b9-4b691a2f7a54,ResourceVersion:11407323,Generation:1,CreationTimestamp:2020-05-17 14:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: 
cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},} May 17 14:09:33.314: INFO: New ReplicaSet "test-cleanup-deployment-55bbcbc84c" of Deployment "test-cleanup-deployment": 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c,GenerateName:,Namespace:deployment-5021,SelfLink:/apis/apps/v1/namespaces/deployment-5021/replicasets/test-cleanup-deployment-55bbcbc84c,UID:f2c5eda4-528e-4a01-a6a2-e9e1ea463434,ResourceVersion:11407325,Generation:1,CreationTimestamp:2020-05-17 14:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b07cae5a-d8e9-4843-b8b9-4b691a2f7a54 0xc0019caf07 0xc0019caf08}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 17 14:09:33.314: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 17 14:09:33.314: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-5021,SelfLink:/apis/apps/v1/namespaces/deployment-5021/replicasets/test-cleanup-controller,UID:6dc31692-490d-42dc-9159-6426e0f377a7,ResourceVersion:11407324,Generation:1,CreationTimestamp:2020-05-17 14:09:28 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment b07cae5a-d8e9-4843-b8b9-4b691a2f7a54 0xc0019cae37 0xc0019cae38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 17 14:09:33.383: INFO: Pod "test-cleanup-controller-978pj" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-978pj,GenerateName:test-cleanup-controller-,Namespace:deployment-5021,SelfLink:/api/v1/namespaces/deployment-5021/pods/test-cleanup-controller-978pj,UID:f034dea8-f648-450f-9446-2fb9f81395f4,ResourceVersion:11407318,Generation:0,CreationTimestamp:2020-05-17 14:09:28 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 6dc31692-490d-42dc-9159-6426e0f377a7 0xc0019cb7f7 0xc0019cb7f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gngnw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gngnw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-gngnw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019cb870} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019cb890}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:09:28 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:09:31 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:09:31 +0000 UTC } {PodScheduled True 
0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:09:28 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.18,StartTime:2020-05-17 14:09:28 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-05-17 14:09:30 +0000 UTC,} nil} {nil nil nil} true 0 docker.io/library/nginx:1.14-alpine docker.io/library/nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 containerd://104ad04c87bfab2c9b3b271c5ed8971fa51a5332e413bfa84116d109e158f765}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} May 17 14:09:33.383: INFO: Pod "test-cleanup-deployment-55bbcbc84c-jgfxf" is not available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment-55bbcbc84c-jgfxf,GenerateName:test-cleanup-deployment-55bbcbc84c-,Namespace:deployment-5021,SelfLink:/api/v1/namespaces/deployment-5021/pods/test-cleanup-deployment-55bbcbc84c-jgfxf,UID:ee7b8f46-bbe6-4189-a807-e82dfe316169,ResourceVersion:11407331,Generation:0,CreationTimestamp:2020-05-17 14:09:33 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod-template-hash: 55bbcbc84c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-deployment-55bbcbc84c f2c5eda4-528e-4a01-a6a2-e9e1ea463434 0xc0019cb977 0xc0019cb978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-gngnw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-gngnw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-gngnw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent 
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc0019cb9f0} {node.kubernetes.io/unreachable Exists NoExecute 0xc0019cba10}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:09:33 +0000 UTC }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:09:33.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5021" for this suite. 
May 17 14:09:39.445: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:09:39.511: INFO: namespace deployment-5021 deletion completed in 6.099862215s • [SLOW TEST:11.324 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:09:39.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the initial replication controller May 17 14:09:39.564: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4538' May 17 14:09:42.443: INFO: stderr: "" May 17 14:09:42.444: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 17 14:09:42.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4538' May 17 14:09:42.548: INFO: stderr: "" May 17 14:09:42.548: INFO: stdout: "update-demo-nautilus-4hx6t update-demo-nautilus-bn84f " May 17 14:09:42.548: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hx6t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:09:42.642: INFO: stderr: "" May 17 14:09:42.642: INFO: stdout: "" May 17 14:09:42.642: INFO: update-demo-nautilus-4hx6t is created but not running May 17 14:09:47.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4538' May 17 14:09:47.739: INFO: stderr: "" May 17 14:09:47.740: INFO: stdout: "update-demo-nautilus-4hx6t update-demo-nautilus-bn84f " May 17 14:09:47.740: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hx6t -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:09:47.839: INFO: stderr: "" May 17 14:09:47.839: INFO: stdout: "true" May 17 14:09:47.839: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4hx6t -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:09:47.933: INFO: stderr: "" May 17 14:09:47.933: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 14:09:47.933: INFO: validating pod update-demo-nautilus-4hx6t May 17 14:09:47.938: INFO: got data: { "image": "nautilus.jpg" } May 17 14:09:47.938: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 17 14:09:47.938: INFO: update-demo-nautilus-4hx6t is verified up and running May 17 14:09:47.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bn84f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:09:48.034: INFO: stderr: "" May 17 14:09:48.034: INFO: stdout: "true" May 17 14:09:48.034: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-bn84f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:09:48.132: INFO: stderr: "" May 17 14:09:48.132: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 14:09:48.132: INFO: validating pod update-demo-nautilus-bn84f May 17 14:09:48.136: INFO: got data: { "image": "nautilus.jpg" } May 17 14:09:48.136: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 17 14:09:48.136: INFO: update-demo-nautilus-bn84f is verified up and running STEP: rolling-update to new replication controller May 17 14:09:48.138: INFO: scanned /root for discovery docs: May 17 14:09:48.138: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4538' May 17 14:10:10.740: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 17 14:10:10.741: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 17 14:10:10.741: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4538' May 17 14:10:10.845: INFO: stderr: "" May 17 14:10:10.845: INFO: stdout: "update-demo-kitten-2gm98 update-demo-kitten-2tb4b " May 17 14:10:10.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2gm98 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:10:10.942: INFO: stderr: "" May 17 14:10:10.942: INFO: stdout: "true" May 17 14:10:10.942: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2gm98 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:10:11.043: INFO: stderr: "" May 17 14:10:11.043: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 17 14:10:11.043: INFO: validating pod update-demo-kitten-2gm98 May 17 14:10:11.054: INFO: got data: { "image": "kitten.jpg" } May 17 14:10:11.054: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 17 14:10:11.054: INFO: update-demo-kitten-2gm98 is verified up and running May 17 14:10:11.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2tb4b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:10:11.162: INFO: stderr: "" May 17 14:10:11.162: INFO: stdout: "true" May 17 14:10:11.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-2tb4b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4538' May 17 14:10:11.276: INFO: stderr: "" May 17 14:10:11.276: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 17 14:10:11.276: INFO: validating pod update-demo-kitten-2tb4b May 17 14:10:11.280: INFO: got data: { "image": "kitten.jpg" } May 17 14:10:11.280: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . 
May 17 14:10:11.280: INFO: update-demo-kitten-2tb4b is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:10:11.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4538" for this suite. May 17 14:10:35.296: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:10:35.371: INFO: namespace kubectl-4538 deletion completed in 24.088085889s • [SLOW TEST:55.861 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:10:35.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on tmpfs May 17 14:10:35.444: INFO: Waiting up to 5m0s for pod "pod-0a43b8ff-8e80-44ab-8f74-ea0a71f4055e" in namespace 
"emptydir-7494" to be "success or failure" May 17 14:10:35.448: INFO: Pod "pod-0a43b8ff-8e80-44ab-8f74-ea0a71f4055e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161245ms May 17 14:10:37.452: INFO: Pod "pod-0a43b8ff-8e80-44ab-8f74-ea0a71f4055e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007969856s May 17 14:10:39.456: INFO: Pod "pod-0a43b8ff-8e80-44ab-8f74-ea0a71f4055e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011830165s STEP: Saw pod success May 17 14:10:39.456: INFO: Pod "pod-0a43b8ff-8e80-44ab-8f74-ea0a71f4055e" satisfied condition "success or failure" May 17 14:10:39.459: INFO: Trying to get logs from node iruya-worker pod pod-0a43b8ff-8e80-44ab-8f74-ea0a71f4055e container test-container: STEP: delete the pod May 17 14:10:39.479: INFO: Waiting for pod pod-0a43b8ff-8e80-44ab-8f74-ea0a71f4055e to disappear May 17 14:10:39.498: INFO: Pod pod-0a43b8ff-8e80-44ab-8f74-ea0a71f4055e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:10:39.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7494" for this suite. 
May 17 14:10:45.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:10:45.604: INFO: namespace emptydir-7494 deletion completed in 6.10247422s • [SLOW TEST:10.233 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:10:45.605: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0777 on node default medium May 17 14:10:45.673: INFO: Waiting up to 5m0s for pod "pod-4317eb9c-3cd1-4ff6-bec4-41da2ac6448c" in namespace "emptydir-6174" to be "success or failure" May 17 14:10:45.676: INFO: Pod "pod-4317eb9c-3cd1-4ff6-bec4-41da2ac6448c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.385377ms May 17 14:10:47.681: INFO: Pod "pod-4317eb9c-3cd1-4ff6-bec4-41da2ac6448c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007716611s May 17 14:10:49.685: INFO: Pod "pod-4317eb9c-3cd1-4ff6-bec4-41da2ac6448c": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.012212961s STEP: Saw pod success May 17 14:10:49.685: INFO: Pod "pod-4317eb9c-3cd1-4ff6-bec4-41da2ac6448c" satisfied condition "success or failure" May 17 14:10:49.688: INFO: Trying to get logs from node iruya-worker2 pod pod-4317eb9c-3cd1-4ff6-bec4-41da2ac6448c container test-container: STEP: delete the pod May 17 14:10:49.708: INFO: Waiting for pod pod-4317eb9c-3cd1-4ff6-bec4-41da2ac6448c to disappear May 17 14:10:49.732: INFO: Pod pod-4317eb9c-3cd1-4ff6-bec4-41da2ac6448c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:10:49.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6174" for this suite. May 17 14:10:55.758: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:10:55.825: INFO: namespace emptydir-6174 deletion completed in 6.08928039s • [SLOW TEST:10.220 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:10:55.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace 
[BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:11:00.008: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2218" for this suite. May 17 14:11:46.027: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:11:46.106: INFO: namespace kubelet-test-2218 deletion completed in 46.094682616s • [SLOW TEST:50.281 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:11:46.106: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 17 14:11:46.159: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6383' May 17 14:11:46.456: INFO: stderr: "" May 17 14:11:46.456: INFO: stdout: "replicationcontroller/redis-master created\n" May 17 14:11:46.456: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6383' May 17 14:11:46.804: INFO: stderr: "" May 17 14:11:46.804: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. May 17 14:11:47.810: INFO: Selector matched 1 pods for map[app:redis] May 17 14:11:47.810: INFO: Found 0 / 1 May 17 14:11:48.809: INFO: Selector matched 1 pods for map[app:redis] May 17 14:11:48.809: INFO: Found 0 / 1 May 17 14:11:49.809: INFO: Selector matched 1 pods for map[app:redis] May 17 14:11:49.809: INFO: Found 0 / 1 May 17 14:11:50.809: INFO: Selector matched 1 pods for map[app:redis] May 17 14:11:50.810: INFO: Found 1 / 1 May 17 14:11:50.810: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 17 14:11:50.813: INFO: Selector matched 1 pods for map[app:redis] May 17 14:11:50.813: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 17 14:11:50.813: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-wsg6r --namespace=kubectl-6383' May 17 14:11:50.927: INFO: stderr: "" May 17 14:11:50.927: INFO: stdout: "Name: redis-master-wsg6r\nNamespace: kubectl-6383\nPriority: 0\nNode: iruya-worker2/172.17.0.5\nStart Time: Sun, 17 May 2020 14:11:46 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.169\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: containerd://d1a48a849688f528528964753697f8fdf0105ef6c633e73b087c338f077ea0bc\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Sun, 17 May 2020 14:11:49 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-pnl26 (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-pnl26:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-pnl26\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 4s default-scheduler Successfully assigned kubectl-6383/redis-master-wsg6r to iruya-worker2\n Normal Pulled 3s kubelet, iruya-worker2 Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 1s kubelet, iruya-worker2 Created container redis-master\n Normal Started 1s kubelet, iruya-worker2 Started container redis-master\n" May 17 14:11:50.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc 
redis-master --namespace=kubectl-6383' May 17 14:11:51.056: INFO: stderr: "" May 17 14:11:51.056: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6383\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 5s replication-controller Created pod: redis-master-wsg6r\n" May 17 14:11:51.056: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-6383' May 17 14:11:51.158: INFO: stderr: "" May 17 14:11:51.158: INFO: stdout: "Name: redis-master\nNamespace: kubectl-6383\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.98.220.92\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.244.1.169:6379\nSession Affinity: None\nEvents: \n" May 17 14:11:51.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-control-plane' May 17 14:11:51.296: INFO: stderr: "" May 17 14:11:51.296: INFO: stdout: "Name: iruya-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:24:20 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime 
LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Sun, 17 May 2020 14:11:44 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Sun, 17 May 2020 14:11:44 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Sun, 17 May 2020 14:11:44 +0000 Sun, 15 Mar 2020 18:24:20 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Sun, 17 May 2020 14:11:44 +0000 Sun, 15 Mar 2020 18:25:00 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.7\n Hostname: iruya-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 09f14f6f4d1640fcaab2243401c9f154\n System UUID: 7c6ca533-492e-400c-b058-c282f97a69ec\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2\n Kubelet Version: v1.15.7\n Kube-Proxy Version: v1.15.7\nPodCIDR: 10.244.0.0/24\nNon-terminated Pods: (7 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system etcd-iruya-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kindnet-zn8sx 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 62d\n kube-system kube-apiserver-iruya-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kube-controller-manager-iruya-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system kube-proxy-46nsr 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n kube-system 
kube-scheduler-iruya-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 62d\n local-path-storage local-path-provisioner-d4947b89c-72frh 0 (0%) 0 (0%) 0 (0%) 0 (0%) 62d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 650m (4%) 100m (0%)\n memory 50Mi (0%) 50Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 17 14:11:51.296: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-6383' May 17 14:11:51.409: INFO: stderr: "" May 17 14:11:51.409: INFO: stdout: "Name: kubectl-6383\nLabels: e2e-framework=kubectl\n e2e-run=82956068-7451-4359-83b5-1c8de8d8a513\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:11:51.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6383" for this suite. 
May 17 14:12:13.436: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:12:13.511: INFO: namespace kubectl-6383 deletion completed in 22.098277732s
• [SLOW TEST:27.405 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
[k8s.io] Kubectl describe
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should check if kubectl describe prints relevant information for rc and pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:12:13.512: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
May 17 14:12:13.574: INFO: Waiting up to 5m0s for pod "pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc" in namespace "emptydir-3431" to be "success or failure"
May 17 14:12:13.598: INFO: Pod "pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 23.670753ms
May 17 14:12:15.602: INFO: Pod "pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027769404s
May 17 14:12:17.611: INFO: Pod "pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc": Phase="Running", Reason="", readiness=true. Elapsed: 4.036228482s
May 17 14:12:19.615: INFO: Pod "pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.040445142s
STEP: Saw pod success
May 17 14:12:19.615: INFO: Pod "pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc" satisfied condition "success or failure"
May 17 14:12:19.618: INFO: Trying to get logs from node iruya-worker pod pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc container test-container:
STEP: delete the pod
May 17 14:12:19.637: INFO: Waiting for pod pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc to disappear
May 17 14:12:19.654: INFO: Pod pod-2b7fc935-ad9d-4368-aabd-0e9e1cbaa6cc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:12:19.654: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3431" for this suite.
May 17 14:12:25.669: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:12:25.746: INFO: namespace emptydir-3431 deletion completed in 6.087970061s
• [SLOW TEST:12.234 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:12:25.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 17 14:12:25.856: INFO: Waiting up to 5m0s for pod "pod-38831830-c73a-4f57-835d-f725b12c0988" in namespace "emptydir-7739" to be "success or failure"
May 17 14:12:25.863: INFO: Pod "pod-38831830-c73a-4f57-835d-f725b12c0988": Phase="Pending", Reason="", readiness=false. Elapsed: 7.242879ms
May 17 14:12:27.974: INFO: Pod "pod-38831830-c73a-4f57-835d-f725b12c0988": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118321484s
May 17 14:12:29.998: INFO: Pod "pod-38831830-c73a-4f57-835d-f725b12c0988": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.141980011s
STEP: Saw pod success
May 17 14:12:29.998: INFO: Pod "pod-38831830-c73a-4f57-835d-f725b12c0988" satisfied condition "success or failure"
May 17 14:12:30.000: INFO: Trying to get logs from node iruya-worker2 pod pod-38831830-c73a-4f57-835d-f725b12c0988 container test-container:
STEP: delete the pod
May 17 14:12:30.033: INFO: Waiting for pod pod-38831830-c73a-4f57-835d-f725b12c0988 to disappear
May 17 14:12:30.037: INFO: Pod pod-38831830-c73a-4f57-835d-f725b12c0988 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:12:30.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7739" for this suite.
May 17 14:12:36.053: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:12:36.129: INFO: namespace emptydir-7739 deletion completed in 6.089255907s
• [SLOW TEST:10.382 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:12:36.129: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-downwardapi-mt8s
STEP: Creating a pod to test atomic-volume-subpath
May 17 14:12:36.252: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-mt8s" in namespace "subpath-471" to be "success or failure"
May 17 14:12:36.265: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Pending", Reason="", readiness=false. Elapsed: 13.499154ms
May 17 14:12:38.269: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017131223s
May 17 14:12:40.273: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 4.021248898s
May 17 14:12:42.277: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 6.025662112s
May 17 14:12:44.282: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 8.029837976s
May 17 14:12:46.286: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 10.034261598s
May 17 14:12:48.290: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 12.038678062s
May 17 14:12:50.295: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 14.043369221s
May 17 14:12:52.300: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 16.048197918s
May 17 14:12:54.304: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 18.052719413s
May 17 14:12:56.309: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 20.05753798s
May 17 14:12:58.314: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Running", Reason="", readiness=true. Elapsed: 22.06205965s
May 17 14:13:00.318: INFO: Pod "pod-subpath-test-downwardapi-mt8s": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.066727191s
STEP: Saw pod success
May 17 14:13:00.318: INFO: Pod "pod-subpath-test-downwardapi-mt8s" satisfied condition "success or failure"
May 17 14:13:00.322: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-downwardapi-mt8s container test-container-subpath-downwardapi-mt8s:
STEP: delete the pod
May 17 14:13:00.386: INFO: Waiting for pod pod-subpath-test-downwardapi-mt8s to disappear
May 17 14:13:00.483: INFO: Pod pod-subpath-test-downwardapi-mt8s no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-mt8s
May 17 14:13:00.483: INFO: Deleting pod "pod-subpath-test-downwardapi-mt8s" in namespace "subpath-471"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:13:00.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-471" for this suite.
May 17 14:13:06.533: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:13:06.616: INFO: namespace subpath-471 deletion completed in 6.126961355s
• [SLOW TEST:30.487 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:13:06.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
May 17 14:13:10.742: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:13:10.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3827" for this suite.
May 17 14:13:16.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:13:16.868: INFO: namespace container-runtime-3827 deletion completed in 6.092825414s
• [SLOW TEST:10.251 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
blackbox test
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
on terminated container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:13:16.868: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:13:16.942: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9009" for this suite.
May 17 14:13:22.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:13:23.044: INFO: namespace services-9009 deletion completed in 6.09859016s
[AfterEach] [sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92
• [SLOW TEST:6.176 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
should provide secure master service [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:13:23.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-6xvm
STEP: Creating a pod to test atomic-volume-subpath
May 17 14:13:23.143: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-6xvm" in namespace "subpath-1795" to be "success or failure"
May 17 14:13:23.146: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.711568ms
May 17 14:13:25.149: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00642528s
May 17 14:13:27.154: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 4.010868062s
May 17 14:13:29.158: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 6.015084618s
May 17 14:13:31.163: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 8.019621346s
May 17 14:13:33.167: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 10.024162929s
May 17 14:13:35.171: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 12.027951555s
May 17 14:13:37.175: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 14.032400558s
May 17 14:13:39.180: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 16.036981988s
May 17 14:13:41.185: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 18.041598245s
May 17 14:13:43.190: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 20.046678159s
May 17 14:13:45.193: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Running", Reason="", readiness=true. Elapsed: 22.050574404s
May 17 14:13:47.198: INFO: Pod "pod-subpath-test-configmap-6xvm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.054775985s
STEP: Saw pod success
May 17 14:13:47.198: INFO: Pod "pod-subpath-test-configmap-6xvm" satisfied condition "success or failure"
May 17 14:13:47.201: INFO: Trying to get logs from node iruya-worker pod pod-subpath-test-configmap-6xvm container test-container-subpath-configmap-6xvm:
STEP: delete the pod
May 17 14:13:47.264: INFO: Waiting for pod pod-subpath-test-configmap-6xvm to disappear
May 17 14:13:47.267: INFO: Pod pod-subpath-test-configmap-6xvm no longer exists
STEP: Deleting pod pod-subpath-test-configmap-6xvm
May 17 14:13:47.267: INFO: Deleting pod "pod-subpath-test-configmap-6xvm" in namespace "subpath-1795"
[AfterEach] [sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:13:47.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1795" for this suite.
May 17 14:13:53.283: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:13:53.367: INFO: namespace subpath-1795 deletion completed in 6.094333401s
• [SLOW TEST:30.323 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
Atomic writer volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
should support subpaths with configmap pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[sig-node] Downward API
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:13:53.368: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
May 17 14:13:53.456: INFO: Waiting up to 5m0s for pod "downward-api-b953e0ca-2716-4063-aa20-e43f9f605be8" in namespace "downward-api-4383" to be "success or failure"
May 17 14:13:53.463: INFO: Pod "downward-api-b953e0ca-2716-4063-aa20-e43f9f605be8": Phase="Pending", Reason="", readiness=false. Elapsed: 7.463176ms
May 17 14:13:55.467: INFO: Pod "downward-api-b953e0ca-2716-4063-aa20-e43f9f605be8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011484087s
May 17 14:13:57.471: INFO: Pod "downward-api-b953e0ca-2716-4063-aa20-e43f9f605be8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014967617s
STEP: Saw pod success
May 17 14:13:57.471: INFO: Pod "downward-api-b953e0ca-2716-4063-aa20-e43f9f605be8" satisfied condition "success or failure"
May 17 14:13:57.473: INFO: Trying to get logs from node iruya-worker2 pod downward-api-b953e0ca-2716-4063-aa20-e43f9f605be8 container dapi-container:
STEP: delete the pod
May 17 14:13:57.527: INFO: Waiting for pod downward-api-b953e0ca-2716-4063-aa20-e43f9f605be8 to disappear
May 17 14:13:57.530: INFO: Pod downward-api-b953e0ca-2716-4063-aa20-e43f9f605be8 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:13:57.530: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4383" for this suite.
May 17 14:14:03.546: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:14:03.630: INFO: namespace downward-api-4383 deletion completed in 6.096583792s
• [SLOW TEST:10.262 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:14:03.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
May 17 14:14:03.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:14:07.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7117" for this suite.
May 17 14:14:57.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:14:57.878: INFO: namespace pods-7117 deletion completed in 50.121972463s
• [SLOW TEST:54.248 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Pods
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:14:57.878: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
May 17 14:15:02.524: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a6b41812-452d-4637-865e-59b0187add05"
May 17 14:15:02.524: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a6b41812-452d-4637-865e-59b0187add05" in namespace "pods-140" to be "terminated due to deadline exceeded"
May 17 14:15:02.641: INFO: Pod "pod-update-activedeadlineseconds-a6b41812-452d-4637-865e-59b0187add05": Phase="Running", Reason="", readiness=true. Elapsed: 117.294435ms
May 17 14:15:04.645: INFO: Pod "pod-update-activedeadlineseconds-a6b41812-452d-4637-865e-59b0187add05": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.121495507s
May 17 14:15:04.645: INFO: Pod "pod-update-activedeadlineseconds-a6b41812-452d-4637-865e-59b0187add05" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:15:04.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-140" for this suite.
May 17 14:15:10.692: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:15:10.768: INFO: namespace pods-140 deletion completed in 6.119437991s
• [SLOW TEST:12.890 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:15:10.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-d6663d04-cb0f-460e-8562-cb71c7449c7f
STEP: Creating a pod to test consume configMaps
May 17 14:15:10.839: INFO: Waiting up to 5m0s for pod "pod-configmaps-f41f1ef0-c174-4b91-8a1f-56ccd7077842" in namespace "configmap-8608" to be "success or failure"
May 17 14:15:10.843: INFO: Pod "pod-configmaps-f41f1ef0-c174-4b91-8a1f-56ccd7077842": Phase="Pending", Reason="", readiness=false. Elapsed: 3.486579ms
May 17 14:15:12.847: INFO: Pod "pod-configmaps-f41f1ef0-c174-4b91-8a1f-56ccd7077842": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007687962s
May 17 14:15:14.850: INFO: Pod "pod-configmaps-f41f1ef0-c174-4b91-8a1f-56ccd7077842": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010712893s
STEP: Saw pod success
May 17 14:15:14.850: INFO: Pod "pod-configmaps-f41f1ef0-c174-4b91-8a1f-56ccd7077842" satisfied condition "success or failure"
May 17 14:15:14.852: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-f41f1ef0-c174-4b91-8a1f-56ccd7077842 container configmap-volume-test:
STEP: delete the pod
May 17 14:15:14.873: INFO: Waiting for pod pod-configmaps-f41f1ef0-c174-4b91-8a1f-56ccd7077842 to disappear
May 17 14:15:14.878: INFO: Pod pod-configmaps-f41f1ef0-c174-4b91-8a1f-56ccd7077842 no longer exists
[AfterEach] [sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:15:14.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8608" for this suite.
May 17 14:15:20.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:15:20.967: INFO: namespace configmap-8608 deletion completed in 6.086664197s
• [SLOW TEST:10.199 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container
should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:15:20.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-079ee804-0c87-435a-8a82-dcdedb0d32b2 in namespace container-probe-8172
May 17 14:15:25.092: INFO: Started pod liveness-079ee804-0c87-435a-8a82-dcdedb0d32b2 in namespace container-probe-8172
STEP: checking the pod's current state and verifying that restartCount is present
May 17 14:15:25.094: INFO: Initial restart count of pod liveness-079ee804-0c87-435a-8a82-dcdedb0d32b2 is 0
May 17 14:15:45.154: INFO: Restart count of pod container-probe-8172/liveness-079ee804-0c87-435a-8a82-dcdedb0d32b2 is now 1 (20.059788146s elapsed)
May 17 14:16:05.204: INFO: Restart count of pod container-probe-8172/liveness-079ee804-0c87-435a-8a82-dcdedb0d32b2 is now 2 (40.109700259s elapsed)
May 17 14:16:25.246: INFO: Restart count of pod container-probe-8172/liveness-079ee804-0c87-435a-8a82-dcdedb0d32b2 is now 3 (1m0.151495143s elapsed)
May 17 14:16:45.305: INFO: Restart count of pod container-probe-8172/liveness-079ee804-0c87-435a-8a82-dcdedb0d32b2 is now 4 (1m20.210908913s elapsed)
May 17 14:17:49.438: INFO: Restart count of pod container-probe-8172/liveness-079ee804-0c87-435a-8a82-dcdedb0d32b2 is now 5 (2m24.34328057s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:17:49.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8172" for this suite.
May 17 14:17:55.471: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:17:55.535: INFO: namespace container-probe-8172 deletion completed in 6.079895595s • [SLOW TEST:154.567 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:17:55.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a replication controller May 17 14:17:55.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6537' May 17 14:17:55.893: INFO: stderr: "" May 17 14:17:55.894: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 17 14:17:55.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6537' May 17 14:17:56.005: INFO: stderr: "" May 17 14:17:56.005: INFO: stdout: "update-demo-nautilus-f8ssf update-demo-nautilus-v66q8 " May 17 14:17:56.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8ssf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:17:56.101: INFO: stderr: "" May 17 14:17:56.101: INFO: stdout: "" May 17 14:17:56.101: INFO: update-demo-nautilus-f8ssf is created but not running May 17 14:18:01.102: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6537' May 17 14:18:01.223: INFO: stderr: "" May 17 14:18:01.223: INFO: stdout: "update-demo-nautilus-f8ssf update-demo-nautilus-v66q8 " May 17 14:18:01.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8ssf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:01.315: INFO: stderr: "" May 17 14:18:01.315: INFO: stdout: "true" May 17 14:18:01.315: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-f8ssf -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:01.406: INFO: stderr: "" May 17 14:18:01.406: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 14:18:01.406: INFO: validating pod update-demo-nautilus-f8ssf May 17 14:18:01.411: INFO: got data: { "image": "nautilus.jpg" } May 17 14:18:01.411: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 17 14:18:01.411: INFO: update-demo-nautilus-f8ssf is verified up and running May 17 14:18:01.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v66q8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:01.509: INFO: stderr: "" May 17 14:18:01.509: INFO: stdout: "true" May 17 14:18:01.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v66q8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:01.602: INFO: stderr: "" May 17 14:18:01.602: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 14:18:01.602: INFO: validating pod update-demo-nautilus-v66q8 May 17 14:18:01.605: INFO: got data: { "image": "nautilus.jpg" } May 17 14:18:01.605: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
May 17 14:18:01.606: INFO: update-demo-nautilus-v66q8 is verified up and running STEP: scaling down the replication controller May 17 14:18:01.608: INFO: scanned /root for discovery docs: May 17 14:18:01.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6537' May 17 14:18:02.722: INFO: stderr: "" May 17 14:18:02.722: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 17 14:18:02.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6537' May 17 14:18:02.824: INFO: stderr: "" May 17 14:18:02.824: INFO: stdout: "update-demo-nautilus-f8ssf update-demo-nautilus-v66q8 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 17 14:18:07.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6537' May 17 14:18:07.934: INFO: stderr: "" May 17 14:18:07.934: INFO: stdout: "update-demo-nautilus-f8ssf update-demo-nautilus-v66q8 " STEP: Replicas for name=update-demo: expected=1 actual=2 May 17 14:18:12.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6537' May 17 14:18:13.027: INFO: stderr: "" May 17 14:18:13.027: INFO: stdout: "update-demo-nautilus-v66q8 " May 17 14:18:13.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v66q8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:13.131: INFO: stderr: "" May 17 14:18:13.131: INFO: stdout: "true" May 17 14:18:13.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v66q8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:13.221: INFO: stderr: "" May 17 14:18:13.221: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 14:18:13.221: INFO: validating pod update-demo-nautilus-v66q8 May 17 14:18:13.224: INFO: got data: { "image": "nautilus.jpg" } May 17 14:18:13.224: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 17 14:18:13.224: INFO: update-demo-nautilus-v66q8 is verified up and running STEP: scaling up the replication controller May 17 14:18:13.225: INFO: scanned /root for discovery docs: May 17 14:18:13.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6537' May 17 14:18:14.352: INFO: stderr: "" May 17 14:18:14.352: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 17 14:18:14.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6537' May 17 14:18:14.447: INFO: stderr: "" May 17 14:18:14.447: INFO: stdout: "update-demo-nautilus-lc6x2 update-demo-nautilus-v66q8 " May 17 14:18:14.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc6x2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:14.536: INFO: stderr: "" May 17 14:18:14.536: INFO: stdout: "" May 17 14:18:14.536: INFO: update-demo-nautilus-lc6x2 is created but not running May 17 14:18:19.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6537' May 17 14:18:19.647: INFO: stderr: "" May 17 14:18:19.647: INFO: stdout: "update-demo-nautilus-lc6x2 update-demo-nautilus-v66q8 " May 17 14:18:19.647: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc6x2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:19.748: INFO: stderr: "" May 17 14:18:19.748: INFO: stdout: "true" May 17 14:18:19.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc6x2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:19.851: INFO: stderr: "" May 17 14:18:19.852: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 14:18:19.852: INFO: validating pod update-demo-nautilus-lc6x2 May 17 14:18:19.856: INFO: got data: { "image": "nautilus.jpg" } May 17 14:18:19.856: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 17 14:18:19.856: INFO: update-demo-nautilus-lc6x2 is verified up and running May 17 14:18:19.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v66q8 -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:19.947: INFO: stderr: "" May 17 14:18:19.947: INFO: stdout: "true" May 17 14:18:19.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-v66q8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6537' May 17 14:18:20.044: INFO: stderr: "" May 17 14:18:20.044: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 17 14:18:20.044: INFO: validating pod update-demo-nautilus-v66q8 May 17 14:18:20.048: INFO: got data: { "image": "nautilus.jpg" } May 17 14:18:20.048: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 17 14:18:20.048: INFO: update-demo-nautilus-v66q8 is verified up and running STEP: using delete to clean up resources May 17 14:18:20.048: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6537' May 17 14:18:20.157: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 17 14:18:20.157: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 17 14:18:20.157: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6537' May 17 14:18:20.263: INFO: stderr: "No resources found.\n" May 17 14:18:20.263: INFO: stdout: "" May 17 14:18:20.263: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6537 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 17 14:18:20.401: INFO: stderr: "" May 17 14:18:20.401: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:18:20.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6537" for this suite. 
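The scale test above repeatedly runs `kubectl get pods -o template` and compares the space-separated pod list against the desired replica count ("Replicas for name=update-demo: expected=1 actual=2") until they match. A sketch of that wait loop, where the hypothetical `list_pods` callable stands in for the kubectl invocation:

```python
import time

def parse_pod_names(stdout):
    """Split kubectl's go-template output ('name1 name2 ') into pod names."""
    return stdout.split()

def wait_for_replicas(list_pods, expected, timeout=300, interval=5):
    """Poll list_pods() until it reports exactly `expected` pods.

    list_pods: callable returning raw template stdout; it stands in for
    running 'kubectl get pods -o template ... -l name=update-demo'.
    Raises TimeoutError if the count never converges, matching the test's
    --timeout=5m bound on the scale operation.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        names = parse_pod_names(list_pods())
        if len(names) == expected:
            return names
        time.sleep(interval)
    raise TimeoutError("replica count never reached %d" % expected)
```

Fed the two stdout snapshots from the log, the loop would return once the second snapshot shows a single pod.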
May 17 14:18:42.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:18:42.496: INFO: namespace kubectl-6537 deletion completed in 22.088422533s • [SLOW TEST:46.961 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:18:42.497: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-volume-map-731eb71a-6fb2-4d97-a74c-e6250547e668 STEP: Creating a pod to test consume configMaps May 17 14:18:42.581: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a6f1e14-5e9f-4ce8-b7cc-6ebca4e84156" in namespace "configmap-8436" to be "success or failure" May 17 14:18:42.584: INFO: Pod "pod-configmaps-0a6f1e14-5e9f-4ce8-b7cc-6ebca4e84156": Phase="Pending", Reason="", readiness=false. 
Elapsed: 3.47557ms May 17 14:18:44.588: INFO: Pod "pod-configmaps-0a6f1e14-5e9f-4ce8-b7cc-6ebca4e84156": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007848601s May 17 14:18:46.592: INFO: Pod "pod-configmaps-0a6f1e14-5e9f-4ce8-b7cc-6ebca4e84156": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011371649s STEP: Saw pod success May 17 14:18:46.592: INFO: Pod "pod-configmaps-0a6f1e14-5e9f-4ce8-b7cc-6ebca4e84156" satisfied condition "success or failure" May 17 14:18:46.594: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-0a6f1e14-5e9f-4ce8-b7cc-6ebca4e84156 container configmap-volume-test: STEP: delete the pod May 17 14:18:46.639: INFO: Waiting for pod pod-configmaps-0a6f1e14-5e9f-4ce8-b7cc-6ebca4e84156 to disappear May 17 14:18:46.650: INFO: Pod pod-configmaps-0a6f1e14-5e9f-4ce8-b7cc-6ebca4e84156 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:18:46.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8436" for this suite. 
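Each of the volume tests above waits up to 5m0s for its pod to satisfy the "success or failure" condition, i.e. to reach a terminal phase (Succeeded or Failed), before fetching container logs. A sketch of that wait, where the hypothetical `get_phase` callable stands in for a pod-status read from the API server:

```python
import time

def wait_for_completion(get_phase, timeout=300, interval=2):
    """Poll get_phase() until the pod reaches a terminal phase.

    get_phase: callable returning one of the pod phases
    ('Pending', 'Running', 'Succeeded', 'Failed', 'Unknown').
    Returns the terminal phase, or raises TimeoutError after `timeout`
    seconds, mirroring the framework's 5m0s bound.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never reached a terminal phase")
```

Against the phase sequence in the log (Pending, Pending, Succeeded over ~4s), this returns "Succeeded" on the third poll.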
May 17 14:18:52.665: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:18:52.748: INFO: namespace configmap-8436 deletion completed in 6.09456549s • [SLOW TEST:10.251 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:18:52.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-8561/secret-test-9930945a-2dec-4941-b268-9aa10819a1c0 STEP: Creating a pod to test consume secrets May 17 14:18:52.820: INFO: Waiting up to 5m0s for pod "pod-configmaps-6d34be6a-7ff8-4f1a-b7aa-b9cecc72767a" in namespace "secrets-8561" to be "success or failure" May 17 14:18:52.860: INFO: Pod "pod-configmaps-6d34be6a-7ff8-4f1a-b7aa-b9cecc72767a": Phase="Pending", Reason="", readiness=false. Elapsed: 39.747135ms May 17 14:18:54.863: INFO: Pod "pod-configmaps-6d34be6a-7ff8-4f1a-b7aa-b9cecc72767a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.042766342s May 17 14:18:56.867: INFO: Pod "pod-configmaps-6d34be6a-7ff8-4f1a-b7aa-b9cecc72767a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.047169145s STEP: Saw pod success May 17 14:18:56.867: INFO: Pod "pod-configmaps-6d34be6a-7ff8-4f1a-b7aa-b9cecc72767a" satisfied condition "success or failure" May 17 14:18:56.870: INFO: Trying to get logs from node iruya-worker2 pod pod-configmaps-6d34be6a-7ff8-4f1a-b7aa-b9cecc72767a container env-test: STEP: delete the pod May 17 14:18:56.886: INFO: Waiting for pod pod-configmaps-6d34be6a-7ff8-4f1a-b7aa-b9cecc72767a to disappear May 17 14:18:56.890: INFO: Pod pod-configmaps-6d34be6a-7ff8-4f1a-b7aa-b9cecc72767a no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:18:56.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8561" for this suite. May 17 14:19:02.906: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:19:02.988: INFO: namespace secrets-8561 deletion completed in 6.095195577s • [SLOW TEST:10.240 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 
14:19:02.988: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod May 17 14:19:03.059: INFO: PodSpec: initContainers in spec.initContainers May 17 14:19:56.309: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-3c355127-0d24-404a-84f0-8037a9d2e4dd", GenerateName:"", Namespace:"init-container-4602", SelfLink:"/api/v1/namespaces/init-container-4602/pods/pod-init-3c355127-0d24-404a-84f0-8037a9d2e4dd", UID:"23d6ca52-8ae7-4897-8521-a5dc65e2b50b", ResourceVersion:"11409263", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63725321943, loc:(*time.Location)(0x7ead8c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"59321853"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-vsl2h", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002e8c000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), 
Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vsl2h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), 
Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vsl2h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-vsl2h", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), 
TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc002c2a088), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc001858060), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c2a110)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc002c2a130)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc002c2a138), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc002c2a13c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725321943, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725321943, 
loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725321943, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725321943, loc:(*time.Location)(0x7ead8c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.5", PodIP:"10.244.1.178", StartTime:(*v1.Time)(0xc0022e80c0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(0xc0022e8140), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc000a46070)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://261fea8c3ff677913858563b8ddc8ae3cc74dbc05c44d09d14f6aad6a8209a4e"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022e81a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, 
ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022e8100), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:19:56.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4602" for this suite. May 17 14:20:18.685: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:20:18.762: INFO: namespace init-container-4602 deletion completed in 22.151289613s • [SLOW TEST:75.774 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 
14:20:18.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: validating cluster-info May 17 14:20:18.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 17 14:20:21.442: INFO: stderr: "" May 17 14:20:21.442: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32769/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:20:21.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-596" for this suite. 
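The `kubectl cluster-info` stdout captured above embeds ANSI color escapes (`\x1b[0;32m`, `\x1b[0;33m`, `\x1b[0m`). A minimal sketch, not part of the e2e framework, of stripping those escapes so the captured text can be grepped (the regex and helper name are assumptions, sufficient for the SGR sequences seen in this log):

```python
import re

# Matches SGR color sequences such as "\x1b[0;32m" — covers the escapes
# present in the captured cluster-info stdout above, not all of ANSI.
ANSI_RE = re.compile(r"\x1b\[[0-9;]*m")

def strip_ansi(s: str) -> str:
    """Remove ANSI color escapes, leaving only the readable text."""
    return ANSI_RE.sub("", s)

captured = ("\x1b[0;32mKubernetes master\x1b[0m is running at "
            "\x1b[0;33mhttps://172.30.12.66:32769\x1b[0m")
print(strip_ansi(captured))
# Kubernetes master is running at https://172.30.12.66:32769
```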
May 17 14:20:27.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:20:27.544: INFO: namespace kubectl-596 deletion completed in 6.098774802s • [SLOW TEST:8.781 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl cluster-info /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:20:27.544: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1376, will wait for the garbage collector to delete the pods May 17 14:20:33.670: INFO: Deleting Job.batch foo took: 5.981659ms May 17 14:20:33.971: INFO: Terminating Job.batch foo pods took: 300.251809ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:21:12.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1376" for this suite. 
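Timing entries in this log ("deletion completed in 6.098774802s", "took: 5.981659ms") use Go's duration formatting. A small sketch, with assumed helper names and handling only the `s`/`ms`/`us` units that actually appear in these lines, for extracting such durations as seconds:

```python
import re

# Go prints durations like "6.098774802s" or "300.251809ms"; this
# simplified parser covers only the units seen in this log, not "3m0s"-style
# compound durations.
_DUR_RE = re.compile(r"(\d+(?:\.\d+)?)(ms|us|s)")
_SCALE = {"s": 1.0, "ms": 1e-3, "us": 1e-6}

def durations_in_seconds(line: str) -> list:
    """Return every Go-style duration found in a log line, in seconds."""
    return [float(v) * _SCALE[u] for v, u in _DUR_RE.findall(line)]

line = "INFO: namespace kubectl-596 deletion completed in 6.098774802s"
print(durations_in_seconds(line))  # [6.098774802]
```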
May 17 14:21:18.310: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:21:18.378: INFO: namespace job-1376 deletion completed in 6.100870313s • [SLOW TEST:50.834 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:21:18.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components May 17 14:21:18.466: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
May 17 14:21:18.466: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4398' May 17 14:21:18.844: INFO: stderr: "" May 17 14:21:18.844: INFO: stdout: "service/redis-slave created\n" May 17 14:21:18.844: INFO:
apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
May 17 14:21:18.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4398' May 17 14:21:19.173: INFO: stderr: "" May 17 14:21:19.173: INFO: stdout: "service/redis-master created\n" May 17 14:21:19.173: INFO:
apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 17 14:21:19.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4398' May 17 14:21:19.556: INFO: stderr: "" May 17 14:21:19.556: INFO: stdout: "service/frontend created\n" May 17 14:21:19.556: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
May 17 14:21:19.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4398' May 17 14:21:19.844: INFO: stderr: "" May 17 14:21:19.844: INFO: stdout: "deployment.apps/frontend created\n" May 17 14:21:19.845: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 17 14:21:19.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4398' May 17 14:21:20.140: INFO: stderr: "" May 17 14:21:20.141: INFO: stdout: "deployment.apps/redis-master created\n" May 17 14:21:20.141: INFO:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
May 17 14:21:20.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4398' May 17 14:21:20.433: INFO: stderr: "" May 17 14:21:20.433: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app May 17 14:21:20.433: INFO: Waiting for all frontend pods to be Running. May 17 14:21:30.484: INFO: Waiting for frontend to serve content. May 17 14:21:30.505: INFO: Trying to add a new entry to the guestbook. May 17 14:21:30.521: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 17 14:21:30.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4398' May 17 14:21:30.703: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 17 14:21:30.703: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources May 17 14:21:30.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4398' May 17 14:21:30.837: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 17 14:21:30.837: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 17 14:21:30.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4398' May 17 14:21:30.950: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 17 14:21:30.950: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 17 14:21:30.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4398' May 17 14:21:31.049: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 17 14:21:31.049: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 17 14:21:31.049: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4398' May 17 14:21:31.143: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 17 14:21:31.143: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources May 17 14:21:31.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4398' May 17 14:21:31.283: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 17 14:21:31.283: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:21:31.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4398" for this suite. May 17 14:22:13.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:22:13.486: INFO: namespace kubectl-4398 deletion completed in 42.190502586s • [SLOW TEST:55.108 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 
14:22:13.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service endpoint-test2 in namespace services-6513 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6513 to expose endpoints map[] May 17 14:22:13.579: INFO: Get endpoints failed (3.388072ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 17 14:22:14.583: INFO: successfully validated that service endpoint-test2 in namespace services-6513 exposes endpoints map[] (1.008081011s elapsed) STEP: Creating pod pod1 in namespace services-6513 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6513 to expose endpoints map[pod1:[80]] May 17 14:22:17.671: INFO: successfully validated that service endpoint-test2 in namespace services-6513 exposes endpoints map[pod1:[80]] (3.080482996s elapsed) STEP: Creating pod pod2 in namespace services-6513 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6513 to expose endpoints map[pod1:[80] pod2:[80]] May 17 14:22:21.775: INFO: successfully validated that service endpoint-test2 in namespace services-6513 exposes endpoints map[pod1:[80] pod2:[80]] (4.100139424s elapsed) STEP: Deleting pod pod1 in namespace services-6513 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-6513 to expose endpoints map[pod2:[80]] May 17 14:22:22.840: INFO: successfully validated that service endpoint-test2 in namespace services-6513 exposes endpoints map[pod2:[80]] (1.061639477s elapsed) STEP: Deleting pod pod2 in namespace services-6513 STEP: waiting up to 3m0s for 
service endpoint-test2 in namespace services-6513 to expose endpoints map[] May 17 14:22:23.850: INFO: successfully validated that service endpoint-test2 in namespace services-6513 exposes endpoints map[] (1.006603083s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:22:23.903: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6513" for this suite. May 17 14:22:45.952: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:22:46.080: INFO: namespace services-6513 deletion completed in 22.148214773s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.594 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:22:46.081: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and 
TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 17 14:22:50.355: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:22:50.400: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-9351" for this suite. May 17 14:22:56.688: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:22:56.824: INFO: namespace container-runtime-9351 deletion completed in 6.420118855s • [SLOW TEST:10.743 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 on terminated container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129 should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:22:56.825: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:23:02.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-7546" for this suite. May 17 14:23:08.484: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:23:08.590: INFO: namespace watch-7546 deletion completed in 6.2082143s • [SLOW TEST:11.766 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 
14:23:08.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 17 14:23:18.708: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:18.708: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:18.745964 6 log.go:172] (0xc0028c09a0) (0xc002c3ea00) Create stream I0517 14:23:18.745990 6 log.go:172] (0xc0028c09a0) (0xc002c3ea00) Stream added, broadcasting: 1 I0517 14:23:18.748522 6 log.go:172] (0xc0028c09a0) Reply frame received for 1 I0517 14:23:18.748560 6 log.go:172] (0xc0028c09a0) (0xc0023320a0) Create stream I0517 14:23:18.748581 6 log.go:172] (0xc0028c09a0) (0xc0023320a0) Stream added, broadcasting: 3 I0517 14:23:18.749722 6 log.go:172] (0xc0028c09a0) Reply frame received for 3 I0517 14:23:18.749771 6 log.go:172] (0xc0028c09a0) (0xc002c3eaa0) Create stream I0517 14:23:18.749786 6 log.go:172] (0xc0028c09a0) (0xc002c3eaa0) Stream added, broadcasting: 5 I0517 14:23:18.750687 6 log.go:172] (0xc0028c09a0) Reply frame received for 5 I0517 14:23:18.834980 6 log.go:172] (0xc0028c09a0) Data frame received for 3 I0517 14:23:18.835007 6 log.go:172] (0xc0023320a0) (3) Data frame handling I0517 14:23:18.835015 6 log.go:172] (0xc0023320a0) (3) Data frame sent I0517 14:23:18.835020 6 log.go:172] (0xc0028c09a0) Data frame received for 3 I0517 14:23:18.835025 6 log.go:172] (0xc0023320a0) (3) 
Data frame handling I0517 14:23:18.835037 6 log.go:172] (0xc0028c09a0) Data frame received for 5 I0517 14:23:18.835048 6 log.go:172] (0xc002c3eaa0) (5) Data frame handling I0517 14:23:18.837018 6 log.go:172] (0xc0028c09a0) Data frame received for 1 I0517 14:23:18.837032 6 log.go:172] (0xc002c3ea00) (1) Data frame handling I0517 14:23:18.837043 6 log.go:172] (0xc002c3ea00) (1) Data frame sent I0517 14:23:18.837501 6 log.go:172] (0xc0028c09a0) (0xc002c3ea00) Stream removed, broadcasting: 1 I0517 14:23:18.837578 6 log.go:172] (0xc0028c09a0) (0xc002c3ea00) Stream removed, broadcasting: 1 I0517 14:23:18.837589 6 log.go:172] (0xc0028c09a0) (0xc0023320a0) Stream removed, broadcasting: 3 I0517 14:23:18.837721 6 log.go:172] (0xc0028c09a0) Go away received I0517 14:23:18.837754 6 log.go:172] (0xc0028c09a0) (0xc002c3eaa0) Stream removed, broadcasting: 5 May 17 14:23:18.837: INFO: Exec stderr: "" May 17 14:23:18.837: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:18.837: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:18.866554 6 log.go:172] (0xc00342cc60) (0xc002332460) Create stream I0517 14:23:18.866579 6 log.go:172] (0xc00342cc60) (0xc002332460) Stream added, broadcasting: 1 I0517 14:23:18.868756 6 log.go:172] (0xc00342cc60) Reply frame received for 1 I0517 14:23:18.868816 6 log.go:172] (0xc00342cc60) (0xc002332500) Create stream I0517 14:23:18.868833 6 log.go:172] (0xc00342cc60) (0xc002332500) Stream added, broadcasting: 3 I0517 14:23:18.869828 6 log.go:172] (0xc00342cc60) Reply frame received for 3 I0517 14:23:18.869866 6 log.go:172] (0xc00342cc60) (0xc002a5caa0) Create stream I0517 14:23:18.869881 6 log.go:172] (0xc00342cc60) (0xc002a5caa0) Stream added, broadcasting: 5 I0517 14:23:18.870655 6 log.go:172] (0xc00342cc60) Reply frame received for 5 I0517 14:23:18.939806 6 log.go:172] (0xc00342cc60) 
Data frame received for 3 I0517 14:23:18.939852 6 log.go:172] (0xc002332500) (3) Data frame handling I0517 14:23:18.939872 6 log.go:172] (0xc002332500) (3) Data frame sent I0517 14:23:18.939884 6 log.go:172] (0xc00342cc60) Data frame received for 3 I0517 14:23:18.939893 6 log.go:172] (0xc002332500) (3) Data frame handling I0517 14:23:18.939921 6 log.go:172] (0xc00342cc60) Data frame received for 5 I0517 14:23:18.939940 6 log.go:172] (0xc002a5caa0) (5) Data frame handling I0517 14:23:18.941430 6 log.go:172] (0xc00342cc60) Data frame received for 1 I0517 14:23:18.941447 6 log.go:172] (0xc002332460) (1) Data frame handling I0517 14:23:18.941459 6 log.go:172] (0xc002332460) (1) Data frame sent I0517 14:23:18.941467 6 log.go:172] (0xc00342cc60) (0xc002332460) Stream removed, broadcasting: 1 I0517 14:23:18.941530 6 log.go:172] (0xc00342cc60) (0xc002332460) Stream removed, broadcasting: 1 I0517 14:23:18.941539 6 log.go:172] (0xc00342cc60) (0xc002332500) Stream removed, broadcasting: 3 I0517 14:23:18.941622 6 log.go:172] (0xc00342cc60) (0xc002a5caa0) Stream removed, broadcasting: 5 I0517 14:23:18.941714 6 log.go:172] (0xc00342cc60) Go away received May 17 14:23:18.941: INFO: Exec stderr: "" May 17 14:23:18.941: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:18.941: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:18.968683 6 log.go:172] (0xc00342dce0) (0xc0023328c0) Create stream I0517 14:23:18.968710 6 log.go:172] (0xc00342dce0) (0xc0023328c0) Stream added, broadcasting: 1 I0517 14:23:18.978486 6 log.go:172] (0xc00342dce0) Reply frame received for 1 I0517 14:23:18.978557 6 log.go:172] (0xc00342dce0) (0xc002c3eb40) Create stream I0517 14:23:18.978575 6 log.go:172] (0xc00342dce0) (0xc002c3eb40) Stream added, broadcasting: 3 I0517 14:23:18.979598 6 log.go:172] (0xc00342dce0) Reply frame received for 3 I0517 
14:23:18.979653 6 log.go:172] (0xc00342dce0) (0xc002839ea0) Create stream I0517 14:23:18.979687 6 log.go:172] (0xc00342dce0) (0xc002839ea0) Stream added, broadcasting: 5 I0517 14:23:18.980729 6 log.go:172] (0xc00342dce0) Reply frame received for 5 I0517 14:23:19.040643 6 log.go:172] (0xc00342dce0) Data frame received for 5 I0517 14:23:19.040674 6 log.go:172] (0xc002839ea0) (5) Data frame handling I0517 14:23:19.040692 6 log.go:172] (0xc00342dce0) Data frame received for 3 I0517 14:23:19.040705 6 log.go:172] (0xc002c3eb40) (3) Data frame handling I0517 14:23:19.040712 6 log.go:172] (0xc002c3eb40) (3) Data frame sent I0517 14:23:19.040719 6 log.go:172] (0xc00342dce0) Data frame received for 3 I0517 14:23:19.040723 6 log.go:172] (0xc002c3eb40) (3) Data frame handling I0517 14:23:19.042093 6 log.go:172] (0xc00342dce0) Data frame received for 1 I0517 14:23:19.042115 6 log.go:172] (0xc0023328c0) (1) Data frame handling I0517 14:23:19.042128 6 log.go:172] (0xc0023328c0) (1) Data frame sent I0517 14:23:19.042142 6 log.go:172] (0xc00342dce0) (0xc0023328c0) Stream removed, broadcasting: 1 I0517 14:23:19.042159 6 log.go:172] (0xc00342dce0) Go away received I0517 14:23:19.042319 6 log.go:172] (0xc00342dce0) (0xc0023328c0) Stream removed, broadcasting: 1 I0517 14:23:19.042341 6 log.go:172] (0xc00342dce0) (0xc002c3eb40) Stream removed, broadcasting: 3 I0517 14:23:19.042349 6 log.go:172] (0xc00342dce0) (0xc002839ea0) Stream removed, broadcasting: 5 May 17 14:23:19.042: INFO: Exec stderr: "" May 17 14:23:19.042: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:19.042: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:19.072876 6 log.go:172] (0xc000e9ac60) (0xc00221d4a0) Create stream I0517 14:23:19.072916 6 log.go:172] (0xc000e9ac60) (0xc00221d4a0) Stream added, broadcasting: 1 I0517 14:23:19.075551 6 
log.go:172] (0xc000e9ac60) Reply frame received for 1 I0517 14:23:19.075593 6 log.go:172] (0xc000e9ac60) (0xc00221d540) Create stream I0517 14:23:19.075608 6 log.go:172] (0xc000e9ac60) (0xc00221d540) Stream added, broadcasting: 3 I0517 14:23:19.076581 6 log.go:172] (0xc000e9ac60) Reply frame received for 3 I0517 14:23:19.076622 6 log.go:172] (0xc000e9ac60) (0xc002332960) Create stream I0517 14:23:19.076636 6 log.go:172] (0xc000e9ac60) (0xc002332960) Stream added, broadcasting: 5 I0517 14:23:19.077757 6 log.go:172] (0xc000e9ac60) Reply frame received for 5 I0517 14:23:19.137994 6 log.go:172] (0xc000e9ac60) Data frame received for 5 I0517 14:23:19.138045 6 log.go:172] (0xc002332960) (5) Data frame handling I0517 14:23:19.138079 6 log.go:172] (0xc000e9ac60) Data frame received for 3 I0517 14:23:19.138098 6 log.go:172] (0xc00221d540) (3) Data frame handling I0517 14:23:19.138150 6 log.go:172] (0xc00221d540) (3) Data frame sent I0517 14:23:19.138183 6 log.go:172] (0xc000e9ac60) Data frame received for 3 I0517 14:23:19.138199 6 log.go:172] (0xc00221d540) (3) Data frame handling I0517 14:23:19.139650 6 log.go:172] (0xc000e9ac60) Data frame received for 1 I0517 14:23:19.139681 6 log.go:172] (0xc00221d4a0) (1) Data frame handling I0517 14:23:19.139707 6 log.go:172] (0xc00221d4a0) (1) Data frame sent I0517 14:23:19.139727 6 log.go:172] (0xc000e9ac60) (0xc00221d4a0) Stream removed, broadcasting: 1 I0517 14:23:19.139755 6 log.go:172] (0xc000e9ac60) Go away received I0517 14:23:19.139846 6 log.go:172] (0xc000e9ac60) (0xc00221d4a0) Stream removed, broadcasting: 1 I0517 14:23:19.139867 6 log.go:172] (0xc000e9ac60) (0xc00221d540) Stream removed, broadcasting: 3 I0517 14:23:19.139880 6 log.go:172] (0xc000e9ac60) (0xc002332960) Stream removed, broadcasting: 5 May 17 14:23:19.139: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 17 14:23:19.139: INFO: ExecWithOptions {Command:[cat /etc/hosts] 
Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:19.139: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:19.175319 6 log.go:172] (0xc0028c1ce0) (0xc002c3ee60) Create stream I0517 14:23:19.175343 6 log.go:172] (0xc0028c1ce0) (0xc002c3ee60) Stream added, broadcasting: 1 I0517 14:23:19.185089 6 log.go:172] (0xc0028c1ce0) Reply frame received for 1 I0517 14:23:19.185318 6 log.go:172] (0xc0028c1ce0) (0xc002838000) Create stream I0517 14:23:19.185333 6 log.go:172] (0xc0028c1ce0) (0xc002838000) Stream added, broadcasting: 3 I0517 14:23:19.186276 6 log.go:172] (0xc0028c1ce0) Reply frame received for 3 I0517 14:23:19.186307 6 log.go:172] (0xc0028c1ce0) (0xc000b62000) Create stream I0517 14:23:19.186321 6 log.go:172] (0xc0028c1ce0) (0xc000b62000) Stream added, broadcasting: 5 I0517 14:23:19.187416 6 log.go:172] (0xc0028c1ce0) Reply frame received for 5 I0517 14:23:19.250231 6 log.go:172] (0xc0028c1ce0) Data frame received for 3 I0517 14:23:19.250260 6 log.go:172] (0xc002838000) (3) Data frame handling I0517 14:23:19.250286 6 log.go:172] (0xc002838000) (3) Data frame sent I0517 14:23:19.250303 6 log.go:172] (0xc0028c1ce0) Data frame received for 3 I0517 14:23:19.250309 6 log.go:172] (0xc002838000) (3) Data frame handling I0517 14:23:19.250410 6 log.go:172] (0xc0028c1ce0) Data frame received for 5 I0517 14:23:19.250453 6 log.go:172] (0xc000b62000) (5) Data frame handling I0517 14:23:19.251516 6 log.go:172] (0xc0028c1ce0) Data frame received for 1 I0517 14:23:19.251538 6 log.go:172] (0xc002c3ee60) (1) Data frame handling I0517 14:23:19.251559 6 log.go:172] (0xc002c3ee60) (1) Data frame sent I0517 14:23:19.251694 6 log.go:172] (0xc0028c1ce0) (0xc002c3ee60) Stream removed, broadcasting: 1 I0517 14:23:19.251735 6 log.go:172] (0xc0028c1ce0) Go away received I0517 14:23:19.251839 6 log.go:172] (0xc0028c1ce0) (0xc002c3ee60) Stream removed, broadcasting: 1 I0517 
14:23:19.251876 6 log.go:172] (0xc0028c1ce0) (0xc002838000) Stream removed, broadcasting: 3 I0517 14:23:19.251900 6 log.go:172] (0xc0028c1ce0) (0xc000b62000) Stream removed, broadcasting: 5 May 17 14:23:19.251: INFO: Exec stderr: "" May 17 14:23:19.251: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:19.251: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:19.284677 6 log.go:172] (0xc003bb8a50) (0xc000b62500) Create stream I0517 14:23:19.284706 6 log.go:172] (0xc003bb8a50) (0xc000b62500) Stream added, broadcasting: 1 I0517 14:23:19.287785 6 log.go:172] (0xc003bb8a50) Reply frame received for 1 I0517 14:23:19.287863 6 log.go:172] (0xc003bb8a50) (0xc000b62640) Create stream I0517 14:23:19.287917 6 log.go:172] (0xc003bb8a50) (0xc000b62640) Stream added, broadcasting: 3 I0517 14:23:19.288974 6 log.go:172] (0xc003bb8a50) Reply frame received for 3 I0517 14:23:19.289015 6 log.go:172] (0xc003bb8a50) (0xc0005ee140) Create stream I0517 14:23:19.289035 6 log.go:172] (0xc003bb8a50) (0xc0005ee140) Stream added, broadcasting: 5 I0517 14:23:19.290073 6 log.go:172] (0xc003bb8a50) Reply frame received for 5 I0517 14:23:19.335679 6 log.go:172] (0xc003bb8a50) Data frame received for 5 I0517 14:23:19.335708 6 log.go:172] (0xc0005ee140) (5) Data frame handling I0517 14:23:19.335725 6 log.go:172] (0xc003bb8a50) Data frame received for 3 I0517 14:23:19.335732 6 log.go:172] (0xc000b62640) (3) Data frame handling I0517 14:23:19.335742 6 log.go:172] (0xc000b62640) (3) Data frame sent I0517 14:23:19.335749 6 log.go:172] (0xc003bb8a50) Data frame received for 3 I0517 14:23:19.335756 6 log.go:172] (0xc000b62640) (3) Data frame handling I0517 14:23:19.336723 6 log.go:172] (0xc003bb8a50) Data frame received for 1 I0517 14:23:19.336747 6 log.go:172] (0xc000b62500) (1) Data frame handling I0517 14:23:19.336763 6 log.go:172] 
(0xc000b62500) (1) Data frame sent I0517 14:23:19.336774 6 log.go:172] (0xc003bb8a50) (0xc000b62500) Stream removed, broadcasting: 1 I0517 14:23:19.336888 6 log.go:172] (0xc003bb8a50) (0xc000b62500) Stream removed, broadcasting: 1 I0517 14:23:19.336905 6 log.go:172] (0xc003bb8a50) (0xc000b62640) Stream removed, broadcasting: 3 I0517 14:23:19.336913 6 log.go:172] (0xc003bb8a50) (0xc0005ee140) Stream removed, broadcasting: 5 May 17 14:23:19.336: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 17 14:23:19.336: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:19.337: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:19.337028 6 log.go:172] (0xc003bb8a50) Go away received I0517 14:23:19.362616 6 log.go:172] (0xc001d748f0) (0xc0005ee500) Create stream I0517 14:23:19.362645 6 log.go:172] (0xc001d748f0) (0xc0005ee500) Stream added, broadcasting: 1 I0517 14:23:19.364633 6 log.go:172] (0xc001d748f0) Reply frame received for 1 I0517 14:23:19.364702 6 log.go:172] (0xc001d748f0) (0xc000b62a00) Create stream I0517 14:23:19.364729 6 log.go:172] (0xc001d748f0) (0xc000b62a00) Stream added, broadcasting: 3 I0517 14:23:19.365806 6 log.go:172] (0xc001d748f0) Reply frame received for 3 I0517 14:23:19.365851 6 log.go:172] (0xc001d748f0) (0xc001882000) Create stream I0517 14:23:19.365868 6 log.go:172] (0xc001d748f0) (0xc001882000) Stream added, broadcasting: 5 I0517 14:23:19.366619 6 log.go:172] (0xc001d748f0) Reply frame received for 5 I0517 14:23:19.427038 6 log.go:172] (0xc001d748f0) Data frame received for 5 I0517 14:23:19.427075 6 log.go:172] (0xc001882000) (5) Data frame handling I0517 14:23:19.427093 6 log.go:172] (0xc001d748f0) Data frame received for 3 I0517 14:23:19.427104 6 log.go:172] (0xc000b62a00) (3) Data frame handling 
I0517 14:23:19.427111 6 log.go:172] (0xc000b62a00) (3) Data frame sent I0517 14:23:19.427118 6 log.go:172] (0xc001d748f0) Data frame received for 3 I0517 14:23:19.427126 6 log.go:172] (0xc000b62a00) (3) Data frame handling I0517 14:23:19.428397 6 log.go:172] (0xc001d748f0) Data frame received for 1 I0517 14:23:19.428413 6 log.go:172] (0xc0005ee500) (1) Data frame handling I0517 14:23:19.428421 6 log.go:172] (0xc0005ee500) (1) Data frame sent I0517 14:23:19.428434 6 log.go:172] (0xc001d748f0) (0xc0005ee500) Stream removed, broadcasting: 1 I0517 14:23:19.428447 6 log.go:172] (0xc001d748f0) Go away received I0517 14:23:19.428583 6 log.go:172] (0xc001d748f0) (0xc0005ee500) Stream removed, broadcasting: 1 I0517 14:23:19.428611 6 log.go:172] (0xc001d748f0) (0xc000b62a00) Stream removed, broadcasting: 3 I0517 14:23:19.428625 6 log.go:172] (0xc001d748f0) (0xc001882000) Stream removed, broadcasting: 5 May 17 14:23:19.428: INFO: Exec stderr: "" May 17 14:23:19.428: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:19.428: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:19.453965 6 log.go:172] (0xc00028d4a0) (0xc002838460) Create stream I0517 14:23:19.453996 6 log.go:172] (0xc00028d4a0) (0xc002838460) Stream added, broadcasting: 1 I0517 14:23:19.456340 6 log.go:172] (0xc00028d4a0) Reply frame received for 1 I0517 14:23:19.456369 6 log.go:172] (0xc00028d4a0) (0xc000b62aa0) Create stream I0517 14:23:19.456376 6 log.go:172] (0xc00028d4a0) (0xc000b62aa0) Stream added, broadcasting: 3 I0517 14:23:19.457640 6 log.go:172] (0xc00028d4a0) Reply frame received for 3 I0517 14:23:19.457699 6 log.go:172] (0xc00028d4a0) (0xc0005ee780) Create stream I0517 14:23:19.457721 6 log.go:172] (0xc00028d4a0) (0xc0005ee780) Stream added, broadcasting: 5 I0517 14:23:19.458638 6 log.go:172] (0xc00028d4a0) Reply frame 
received for 5 I0517 14:23:19.515893 6 log.go:172] (0xc00028d4a0) Data frame received for 3 I0517 14:23:19.515985 6 log.go:172] (0xc000b62aa0) (3) Data frame handling I0517 14:23:19.516049 6 log.go:172] (0xc000b62aa0) (3) Data frame sent I0517 14:23:19.516075 6 log.go:172] (0xc00028d4a0) Data frame received for 3 I0517 14:23:19.516110 6 log.go:172] (0xc00028d4a0) Data frame received for 5 I0517 14:23:19.516170 6 log.go:172] (0xc0005ee780) (5) Data frame handling I0517 14:23:19.516202 6 log.go:172] (0xc000b62aa0) (3) Data frame handling I0517 14:23:19.517787 6 log.go:172] (0xc00028d4a0) Data frame received for 1 I0517 14:23:19.517804 6 log.go:172] (0xc002838460) (1) Data frame handling I0517 14:23:19.517814 6 log.go:172] (0xc002838460) (1) Data frame sent I0517 14:23:19.518003 6 log.go:172] (0xc00028d4a0) (0xc002838460) Stream removed, broadcasting: 1 I0517 14:23:19.518076 6 log.go:172] (0xc00028d4a0) Go away received I0517 14:23:19.518166 6 log.go:172] (0xc00028d4a0) (0xc002838460) Stream removed, broadcasting: 1 I0517 14:23:19.518187 6 log.go:172] (0xc00028d4a0) (0xc000b62aa0) Stream removed, broadcasting: 3 I0517 14:23:19.518203 6 log.go:172] (0xc00028d4a0) (0xc0005ee780) Stream removed, broadcasting: 5 May 17 14:23:19.518: INFO: Exec stderr: "" May 17 14:23:19.518: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:19.518: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:19.548361 6 log.go:172] (0xc001e660b0) (0xc000b63400) Create stream I0517 14:23:19.548384 6 log.go:172] (0xc001e660b0) (0xc000b63400) Stream added, broadcasting: 1 I0517 14:23:19.550972 6 log.go:172] (0xc001e660b0) Reply frame received for 1 I0517 14:23:19.551030 6 log.go:172] (0xc001e660b0) (0xc0005ee960) Create stream I0517 14:23:19.551050 6 log.go:172] (0xc001e660b0) (0xc0005ee960) Stream added, broadcasting: 3 I0517 
14:23:19.551998 6 log.go:172] (0xc001e660b0) Reply frame received for 3 I0517 14:23:19.552033 6 log.go:172] (0xc001e660b0) (0xc0028385a0) Create stream I0517 14:23:19.552049 6 log.go:172] (0xc001e660b0) (0xc0028385a0) Stream added, broadcasting: 5 I0517 14:23:19.553030 6 log.go:172] (0xc001e660b0) Reply frame received for 5 I0517 14:23:19.610688 6 log.go:172] (0xc001e660b0) Data frame received for 5 I0517 14:23:19.610738 6 log.go:172] (0xc0028385a0) (5) Data frame handling I0517 14:23:19.610776 6 log.go:172] (0xc001e660b0) Data frame received for 3 I0517 14:23:19.610806 6 log.go:172] (0xc0005ee960) (3) Data frame handling I0517 14:23:19.610826 6 log.go:172] (0xc0005ee960) (3) Data frame sent I0517 14:23:19.610839 6 log.go:172] (0xc001e660b0) Data frame received for 3 I0517 14:23:19.610848 6 log.go:172] (0xc0005ee960) (3) Data frame handling I0517 14:23:19.612072 6 log.go:172] (0xc001e660b0) Data frame received for 1 I0517 14:23:19.612091 6 log.go:172] (0xc000b63400) (1) Data frame handling I0517 14:23:19.612102 6 log.go:172] (0xc000b63400) (1) Data frame sent I0517 14:23:19.612115 6 log.go:172] (0xc001e660b0) (0xc000b63400) Stream removed, broadcasting: 1 I0517 14:23:19.612131 6 log.go:172] (0xc001e660b0) Go away received I0517 14:23:19.612214 6 log.go:172] (0xc001e660b0) (0xc000b63400) Stream removed, broadcasting: 1 I0517 14:23:19.612231 6 log.go:172] (0xc001e660b0) (0xc0005ee960) Stream removed, broadcasting: 3 I0517 14:23:19.612239 6 log.go:172] (0xc001e660b0) (0xc0028385a0) Stream removed, broadcasting: 5 May 17 14:23:19.612: INFO: Exec stderr: "" May 17 14:23:19.612: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-1559 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:23:19.612: INFO: >>> kubeConfig: /root/.kube/config I0517 14:23:19.641392 6 log.go:172] (0xc001034160) (0xc0028388c0) Create stream I0517 14:23:19.641428 6 log.go:172] 
(0xc001034160) (0xc0028388c0) Stream added, broadcasting: 1 I0517 14:23:19.644232 6 log.go:172] (0xc001034160) Reply frame received for 1 I0517 14:23:19.644273 6 log.go:172] (0xc001034160) (0xc0005eef00) Create stream I0517 14:23:19.644288 6 log.go:172] (0xc001034160) (0xc0005eef00) Stream added, broadcasting: 3 I0517 14:23:19.645082 6 log.go:172] (0xc001034160) Reply frame received for 3 I0517 14:23:19.645217 6 log.go:172] (0xc001034160) (0xc0005eefa0) Create stream I0517 14:23:19.645231 6 log.go:172] (0xc001034160) (0xc0005eefa0) Stream added, broadcasting: 5 I0517 14:23:19.646021 6 log.go:172] (0xc001034160) Reply frame received for 5 I0517 14:23:19.721460 6 log.go:172] (0xc001034160) Data frame received for 5 I0517 14:23:19.721571 6 log.go:172] (0xc0005eefa0) (5) Data frame handling I0517 14:23:19.721611 6 log.go:172] (0xc001034160) Data frame received for 3 I0517 14:23:19.721626 6 log.go:172] (0xc0005eef00) (3) Data frame handling I0517 14:23:19.721637 6 log.go:172] (0xc0005eef00) (3) Data frame sent I0517 14:23:19.721646 6 log.go:172] (0xc001034160) Data frame received for 3 I0517 14:23:19.721654 6 log.go:172] (0xc0005eef00) (3) Data frame handling I0517 14:23:19.722684 6 log.go:172] (0xc001034160) Data frame received for 1 I0517 14:23:19.722715 6 log.go:172] (0xc0028388c0) (1) Data frame handling I0517 14:23:19.722731 6 log.go:172] (0xc0028388c0) (1) Data frame sent I0517 14:23:19.722836 6 log.go:172] (0xc001034160) (0xc0028388c0) Stream removed, broadcasting: 1 I0517 14:23:19.722866 6 log.go:172] (0xc001034160) Go away received I0517 14:23:19.722948 6 log.go:172] (0xc001034160) (0xc0028388c0) Stream removed, broadcasting: 1 I0517 14:23:19.722974 6 log.go:172] (0xc001034160) (0xc0005eef00) Stream removed, broadcasting: 3 I0517 14:23:19.722989 6 log.go:172] (0xc001034160) (0xc0005eefa0) Stream removed, broadcasting: 5 May 17 14:23:19.722: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:23:19.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-1559" for this suite. May 17 14:24:05.755: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:24:05.833: INFO: namespace e2e-kubelet-etc-hosts-1559 deletion completed in 46.106360847s • [SLOW TEST:57.243 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:24:05.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine May 17 14:24:05.920: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-1746' May 17 14:24:06.050: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 17 14:24:06.050: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n" STEP: verifying the rc e2e-test-nginx-rc was created May 17 14:24:06.063: INFO: Waiting for rc e2e-test-nginx-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 17 14:24:06.075: INFO: scanned /root for discovery docs: May 17 14:24:06.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-1746' May 17 14:24:21.989: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 17 14:24:21.989: INFO: stdout: "Created e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92\nScaling up e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92 from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92 up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. 
Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92 to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n" STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up. May 17 14:24:21.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1746' May 17 14:24:22.094: INFO: stderr: "" May 17 14:24:22.094: INFO: stdout: "e2e-test-nginx-rc-2ftzb e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92-9tsrp " STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2 May 17 14:24:27.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-1746' May 17 14:24:27.203: INFO: stderr: "" May 17 14:24:27.203: INFO: stdout: "e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92-9tsrp " May 17 14:24:27.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92-9tsrp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1746' May 17 14:24:27.299: INFO: stderr: "" May 17 14:24:27.299: INFO: stdout: "true" May 17 14:24:27.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92-9tsrp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1746' May 17 14:24:27.389: INFO: stderr: "" May 17 14:24:27.389: INFO: stdout: "docker.io/library/nginx:1.14-alpine" May 17 14:24:27.389: INFO: e2e-test-nginx-rc-ebb8710677f665e94306b8d0423b8f92-9tsrp is verified up and running [AfterEach] [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522 May 17 14:24:27.389: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-1746' May 17 14:24:27.509: INFO: stderr: "" May 17 14:24:27.509: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:24:27.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1746" for this suite. 
May 17 14:24:33.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:24:33.600: INFO: namespace kubectl-1746 deletion completed in 6.088526086s • [SLOW TEST:27.767 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:24:33.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name cm-test-opt-del-f7dfcff3-0552-48c2-897b-0ee7a2512aec STEP: Creating configMap with name cm-test-opt-upd-3619d1ca-4b94-4d96-a42d-9836ecd8ca14 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-f7dfcff3-0552-48c2-897b-0ee7a2512aec STEP: Updating configmap cm-test-opt-upd-3619d1ca-4b94-4d96-a42d-9836ecd8ca14 STEP: Creating configMap with name cm-test-opt-create-e6a5a7d1-1d13-4d58-ad4f-eb5b373fd814 STEP: waiting to observe update in volume [AfterEach] 
[sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:24:41.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9153" for this suite. May 17 14:25:03.856: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:25:03.925: INFO: namespace projected-9153 deletion completed in 22.079592941s • [SLOW TEST:30.324 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:25:03.926: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating server pod server in namespace prestop-6065 STEP: Waiting for pods to come up. 
STEP: Creating tester pod tester in namespace prestop-6065 STEP: Deleting pre-stop pod May 17 14:25:17.054: INFO: Saw:
{
  "Hostname": "server",
  "Sent": null,
  "Received": {
    "prestop": 1
  },
  "Errors": null,
  "Log": [
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
    "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
  ],
  "StillContactingPeers": true
}
STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:25:17.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6065" for this suite. May 17 14:25:55.107: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:25:55.192: INFO: namespace prestop-6065 deletion completed in 38.108800551s • [SLOW TEST:51.266 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:25:55.192: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a 
default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0666 on node default medium May 17 14:25:55.281: INFO: Waiting up to 5m0s for pod "pod-eeafbb6d-ed0c-4fe4-b6de-c7514c96204d" in namespace "emptydir-1886" to be "success or failure" May 17 14:25:55.284: INFO: Pod "pod-eeafbb6d-ed0c-4fe4-b6de-c7514c96204d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.960365ms May 17 14:25:57.288: INFO: Pod "pod-eeafbb6d-ed0c-4fe4-b6de-c7514c96204d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006719009s May 17 14:25:59.292: INFO: Pod "pod-eeafbb6d-ed0c-4fe4-b6de-c7514c96204d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010898349s STEP: Saw pod success May 17 14:25:59.292: INFO: Pod "pod-eeafbb6d-ed0c-4fe4-b6de-c7514c96204d" satisfied condition "success or failure" May 17 14:25:59.294: INFO: Trying to get logs from node iruya-worker pod pod-eeafbb6d-ed0c-4fe4-b6de-c7514c96204d container test-container: STEP: delete the pod May 17 14:25:59.321: INFO: Waiting for pod pod-eeafbb6d-ed0c-4fe4-b6de-c7514c96204d to disappear May 17 14:25:59.357: INFO: Pod pod-eeafbb6d-ed0c-4fe4-b6de-c7514c96204d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:25:59.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1886" for this suite. 
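The 'Waiting up to 5m0s for pod ... to be "success or failure"' lines above are a simple phase poll: the framework re-reads the pod until its phase leaves Pending and terminates. A rough sketch of that loop (the `get_phase` callable is a hypothetical stub; the real framework reads the pod object via the API server):

```python
import time

def wait_success_or_failure(get_phase, timeout=300, interval=2.0):
    """Poll a pod's phase until it terminates, mirroring the Pending -> Succeeded lines above."""
    start = time.monotonic()
    while time.monotonic() - start < timeout:
        phase = get_phase()
        if phase in ("Succeeded", "Failed"):
            return phase
        time.sleep(interval)
    raise TimeoutError("pod never terminated")

# Simulated phase sequence matching the log's three observations.
phases = iter(["Pending", "Pending", "Succeeded"])
final = wait_success_or_failure(lambda: next(phases), interval=0.0)
```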
May 17 14:26:05.383: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:26:05.482: INFO: namespace emptydir-1886 deletion completed in 6.122257868s • [SLOW TEST:10.291 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:26:05.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod May 17 14:26:10.086: INFO: Successfully updated pod "labelsupdate9c5ca1e8-df37-4dc4-a6ba-8b968906f63a" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:26:12.154: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8255" for this suite. 
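The 'Successfully updated pod "labelsupdate..."' step followed by a two-second wait reflects polling the projected downward-API labels file until the new label value materializes in the volume. A toy sketch of that check (the `read_file` callable is an illustrative stand-in, not the e2e framework's code):

```python
def wait_for_label(read_file, key, value, max_polls=10):
    """Poll a downward-API labels file until `key="value"` shows up in its contents."""
    want = f'{key}="{value}"'
    for _ in range(max_polls):
        if want in read_file():
            return True
    return False

# Simulated file contents: the kubelet rewrites the file after the label update.
reads = iter(['foo="bar"', 'foo="bar"', 'foo="bar"\nupdated="true"'])
ok = wait_for_label(lambda: next(reads), "updated", "true", max_polls=3)
```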
May 17 14:26:34.168: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:26:34.248: INFO: namespace projected-8255 deletion completed in 22.089502938s • [SLOW TEST:28.765 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:26:34.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-3161 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-3161 STEP: Creating statefulset with conflicting port in namespace statefulset-3161 STEP: Waiting until pod test-pod will start running 
in namespace statefulset-3161 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-3161 May 17 14:26:40.376: INFO: Observed stateful pod in namespace: statefulset-3161, name: ss-0, uid: 7f43e782-f029-42f9-a76c-f6fd5446157d, status phase: Failed. Waiting for statefulset controller to delete. May 17 14:26:40.381: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-3161 STEP: Removing pod with conflicting port in namespace statefulset-3161 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-3161 and is in the Running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 17 14:26:46.463: INFO: Deleting all statefulsets in ns statefulset-3161 May 17 14:26:46.466: INFO: Scaling statefulset ss to 0 May 17 14:26:56.536: INFO: Waiting for statefulset status.replicas updated to 0 May 17 14:26:56.540: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:26:56.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-3161" for this suite. 
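The 'Observed stateful pod ... status phase: Failed. Waiting for statefulset controller to delete.' / 'Observed delete event' pair above is a watch for delete-then-recreate: the test treats the pod as recreated once a delete of the original UID is followed by an add with a different UID. A simplified sketch over a stream of (event, uid) pairs (the event names here are illustrative, not the client-go watch API):

```python
def wait_recreated(events, original_uid):
    """Return the new pod UID once the original is deleted and a replacement appears."""
    deleted = False
    for kind, uid in events:
        if kind == "DELETED" and uid == original_uid:
            deleted = True
        elif kind == "ADDED" and deleted and uid != original_uid:
            return uid
    raise RuntimeError("pod was never recreated")

# Simulated event stream: the failed pod is deleted, then recreated with a fresh UID.
stream = [("MODIFIED", "uid-old"), ("DELETED", "uid-old"), ("ADDED", "uid-new")]
new_uid = wait_recreated(stream, "uid-old")
```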
May 17 14:27:02.576: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:27:02.654: INFO: namespace statefulset-3161 deletion completed in 6.089288802s • [SLOW TEST:28.404 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:27:02.655: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. 
STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:27:08.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4023" for this suite. May 17 14:27:15.025: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:27:15.094: INFO: namespace namespaces-4023 deletion completed in 6.093290406s STEP: Destroying namespace "nsdeletetest-5536" for this suite. May 17 14:27:15.096: INFO: Namespace nsdeletetest-5536 was already deleted STEP: Destroying namespace "nsdeletetest-3721" for this suite. May 17 14:27:21.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:27:21.194: INFO: namespace nsdeletetest-3721 deletion completed in 6.09787244s • [SLOW TEST:18.539 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:27:21.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6774.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6774.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6774.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6774.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6774.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6774.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6774.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6774.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6774.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6774.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.196.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.196.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.196.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.196.100_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6774.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6774.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6774.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6774.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6774.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6774.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6774.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6774.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6774.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6774.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6774.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 100.196.99.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.99.196.100_udp@PTR;check="$$(dig +tcp +noall +answer +search 100.196.99.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.99.196.100_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 17 14:27:27.412: INFO: Unable to read wheezy_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:27.415: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:27.418: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:27.420: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:27.441: INFO: Unable to read jessie_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:27.444: INFO: Unable to read jessie_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:27.447: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod 
dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:27.449: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:27.472: INFO: Lookups using dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b failed for: [wheezy_udp@dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_udp@dns-test-service.dns-6774.svc.cluster.local jessie_tcp@dns-test-service.dns-6774.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local] May 17 14:27:32.477: INFO: Unable to read wheezy_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:32.481: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:32.484: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:32.487: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod 
dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:32.507: INFO: Unable to read jessie_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:32.509: INFO: Unable to read jessie_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:32.512: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:32.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:32.533: INFO: Lookups using dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b failed for: [wheezy_udp@dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_udp@dns-test-service.dns-6774.svc.cluster.local jessie_tcp@dns-test-service.dns-6774.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local] May 17 14:27:37.476: INFO: Unable to read wheezy_udp@dns-test-service.dns-6774.svc.cluster.local from pod 
dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:37.479: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:37.482: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:37.484: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:37.500: INFO: Unable to read jessie_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:37.519: INFO: Unable to read jessie_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:37.521: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:37.524: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not 
find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:37.538: INFO: Lookups using dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b failed for: [wheezy_udp@dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_udp@dns-test-service.dns-6774.svc.cluster.local jessie_tcp@dns-test-service.dns-6774.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local] May 17 14:27:42.478: INFO: Unable to read wheezy_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:42.482: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:42.485: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:42.488: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:42.508: INFO: Unable to read jessie_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods 
dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:42.510: INFO: Unable to read jessie_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:42.513: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:42.516: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:42.533: INFO: Lookups using dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b failed for: [wheezy_udp@dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_udp@dns-test-service.dns-6774.svc.cluster.local jessie_tcp@dns-test-service.dns-6774.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local] May 17 14:27:47.476: INFO: Unable to read wheezy_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:47.480: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods 
dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:47.482: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:47.485: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:47.504: INFO: Unable to read jessie_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:47.507: INFO: Unable to read jessie_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:47.510: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:47.512: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:47.525: INFO: Lookups using dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b failed for: [wheezy_udp@dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local 
wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_udp@dns-test-service.dns-6774.svc.cluster.local jessie_tcp@dns-test-service.dns-6774.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local] May 17 14:27:52.477: INFO: Unable to read wheezy_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:52.480: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:52.484: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:52.488: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:52.507: INFO: Unable to read jessie_udp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:52.511: INFO: Unable to read jessie_tcp@dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:52.515: INFO: Unable to read 
jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:52.517: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local from pod dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b: the server could not find the requested resource (get pods dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b) May 17 14:27:52.537: INFO: Lookups using dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b failed for: [wheezy_udp@dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@dns-test-service.dns-6774.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_udp@dns-test-service.dns-6774.svc.cluster.local jessie_tcp@dns-test-service.dns-6774.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6774.svc.cluster.local] May 17 14:27:57.530: INFO: DNS probes using dns-6774/dns-test-653baa56-88a9-4ce6-8d23-63fdd07e436b succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:27:58.474: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6774" for this suite. 
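For reference, the record names probed above come from two simple transformations visible in the dig loops: pod A records dash-encode the pod IP (the `hostname -i | awk -F.` pipeline), and PTR lookups reverse the IP's octets under `in-addr.arpa.` (10.99.196.100 becomes 100.196.99.10.in-addr.arpa.). A minimal sketch of both, with illustrative helper names that are not part of the test framework:

```python
def pod_a_record(ip: str, namespace: str, domain: str = "cluster.local") -> str:
    """Dashed-IP pod A record: 10.244.1.188 -> 10-244-1-188.<ns>.pod.<domain>."""
    return f"{ip.replace('.', '-')}.{namespace}.pod.{domain}"


def ptr_name(ip: str) -> str:
    """Reverse-DNS PTR owner name: octets reversed under in-addr.arpa."""
    return ".".join(reversed(ip.split("."))) + ".in-addr.arpa."
```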
May 17 14:28:04.530: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:28:04.610: INFO: namespace dns-6774 deletion completed in 6.106654605s • [SLOW TEST:43.416 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:28:04.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-9560 STEP: creating a selector STEP: Creating the service pods in kubernetes May 17 14:28:04.647: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 17 14:28:28.775: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.188:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9560 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 
14:28:28.776: INFO: >>> kubeConfig: /root/.kube/config I0517 14:28:28.810795 6 log.go:172] (0xc00134b340) (0xc0028395e0) Create stream I0517 14:28:28.810829 6 log.go:172] (0xc00134b340) (0xc0028395e0) Stream added, broadcasting: 1 I0517 14:28:28.813794 6 log.go:172] (0xc00134b340) Reply frame received for 1 I0517 14:28:28.813838 6 log.go:172] (0xc00134b340) (0xc0026320a0) Create stream I0517 14:28:28.813852 6 log.go:172] (0xc00134b340) (0xc0026320a0) Stream added, broadcasting: 3 I0517 14:28:28.814780 6 log.go:172] (0xc00134b340) Reply frame received for 3 I0517 14:28:28.814810 6 log.go:172] (0xc00134b340) (0xc002839680) Create stream I0517 14:28:28.814818 6 log.go:172] (0xc00134b340) (0xc002839680) Stream added, broadcasting: 5 I0517 14:28:28.815831 6 log.go:172] (0xc00134b340) Reply frame received for 5 I0517 14:28:28.901613 6 log.go:172] (0xc00134b340) Data frame received for 3 I0517 14:28:28.901726 6 log.go:172] (0xc0026320a0) (3) Data frame handling I0517 14:28:28.901763 6 log.go:172] (0xc0026320a0) (3) Data frame sent I0517 14:28:28.901782 6 log.go:172] (0xc00134b340) Data frame received for 3 I0517 14:28:28.901809 6 log.go:172] (0xc00134b340) Data frame received for 5 I0517 14:28:28.901879 6 log.go:172] (0xc002839680) (5) Data frame handling I0517 14:28:28.901948 6 log.go:172] (0xc0026320a0) (3) Data frame handling I0517 14:28:28.903458 6 log.go:172] (0xc00134b340) Data frame received for 1 I0517 14:28:28.903488 6 log.go:172] (0xc0028395e0) (1) Data frame handling I0517 14:28:28.903505 6 log.go:172] (0xc0028395e0) (1) Data frame sent I0517 14:28:28.903541 6 log.go:172] (0xc00134b340) (0xc0028395e0) Stream removed, broadcasting: 1 I0517 14:28:28.903562 6 log.go:172] (0xc00134b340) Go away received I0517 14:28:28.903699 6 log.go:172] (0xc00134b340) (0xc0028395e0) Stream removed, broadcasting: 1 I0517 14:28:28.903715 6 log.go:172] (0xc00134b340) (0xc0026320a0) Stream removed, broadcasting: 3 I0517 14:28:28.903721 6 log.go:172] (0xc00134b340) (0xc002839680) 
Stream removed, broadcasting: 5 May 17 14:28:28.903: INFO: Found all expected endpoints: [netserver-0] May 17 14:28:28.907: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.44:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9560 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:28:28.907: INFO: >>> kubeConfig: /root/.kube/config I0517 14:28:28.936769 6 log.go:172] (0xc00134bef0) (0xc002839a40) Create stream I0517 14:28:28.936801 6 log.go:172] (0xc00134bef0) (0xc002839a40) Stream added, broadcasting: 1 I0517 14:28:28.939612 6 log.go:172] (0xc00134bef0) Reply frame received for 1 I0517 14:28:28.939653 6 log.go:172] (0xc00134bef0) (0xc001c94640) Create stream I0517 14:28:28.939664 6 log.go:172] (0xc00134bef0) (0xc001c94640) Stream added, broadcasting: 3 I0517 14:28:28.940566 6 log.go:172] (0xc00134bef0) Reply frame received for 3 I0517 14:28:28.940617 6 log.go:172] (0xc00134bef0) (0xc002632320) Create stream I0517 14:28:28.940628 6 log.go:172] (0xc00134bef0) (0xc002632320) Stream added, broadcasting: 5 I0517 14:28:28.941782 6 log.go:172] (0xc00134bef0) Reply frame received for 5 I0517 14:28:29.002202 6 log.go:172] (0xc00134bef0) Data frame received for 5 I0517 14:28:29.002242 6 log.go:172] (0xc002632320) (5) Data frame handling I0517 14:28:29.002270 6 log.go:172] (0xc00134bef0) Data frame received for 3 I0517 14:28:29.002291 6 log.go:172] (0xc001c94640) (3) Data frame handling I0517 14:28:29.002326 6 log.go:172] (0xc001c94640) (3) Data frame sent I0517 14:28:29.002520 6 log.go:172] (0xc00134bef0) Data frame received for 3 I0517 14:28:29.002549 6 log.go:172] (0xc001c94640) (3) Data frame handling I0517 14:28:29.003928 6 log.go:172] (0xc00134bef0) Data frame received for 1 I0517 14:28:29.003948 6 log.go:172] (0xc002839a40) (1) Data frame handling I0517 14:28:29.003959 6 log.go:172] (0xc002839a40) (1) Data frame sent 
I0517 14:28:29.003972 6 log.go:172] (0xc00134bef0) (0xc002839a40) Stream removed, broadcasting: 1 I0517 14:28:29.003999 6 log.go:172] (0xc00134bef0) Go away received I0517 14:28:29.004120 6 log.go:172] (0xc00134bef0) (0xc002839a40) Stream removed, broadcasting: 1 I0517 14:28:29.004198 6 log.go:172] (0xc00134bef0) (0xc001c94640) Stream removed, broadcasting: 3 I0517 14:28:29.004285 6 log.go:172] (0xc00134bef0) (0xc002632320) Stream removed, broadcasting: 5 May 17 14:28:29.004: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:28:29.004: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9560" for this suite. May 17 14:28:53.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:28:53.096: INFO: namespace pod-network-test-9560 deletion completed in 24.088155146s • [SLOW TEST:48.485 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 
14:28:53.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 17 14:29:01.210: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 14:29:01.215: INFO: Pod pod-with-poststart-http-hook still exists May 17 14:29:03.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 14:29:03.270: INFO: Pod pod-with-poststart-http-hook still exists May 17 14:29:05.216: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 14:29:05.221: INFO: Pod pod-with-poststart-http-hook still exists May 17 14:29:07.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 14:29:07.220: INFO: Pod pod-with-poststart-http-hook still exists May 17 14:29:09.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 14:29:09.220: INFO: Pod pod-with-poststart-http-hook still exists May 17 14:29:11.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 14:29:11.219: INFO: Pod pod-with-poststart-http-hook still exists May 17 14:29:13.215: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 17 14:29:13.220: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:29:13.220: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3423" for this suite. May 17 14:29:35.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:29:35.312: INFO: namespace container-lifecycle-hook-3423 deletion completed in 22.088146427s • [SLOW TEST:42.215 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:29:35.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating service multi-endpoint-test in namespace services-9922 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9922 to expose endpoints map[] May 17 14:29:35.426: INFO: Get endpoints failed (12.669346ms elapsed, ignoring for 5s): endpoints 
"multi-endpoint-test" not found May 17 14:29:36.430: INFO: successfully validated that service multi-endpoint-test in namespace services-9922 exposes endpoints map[] (1.016855844s elapsed) STEP: Creating pod pod1 in namespace services-9922 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9922 to expose endpoints map[pod1:[100]] May 17 14:29:39.542: INFO: successfully validated that service multi-endpoint-test in namespace services-9922 exposes endpoints map[pod1:[100]] (3.103713142s elapsed) STEP: Creating pod pod2 in namespace services-9922 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9922 to expose endpoints map[pod1:[100] pod2:[101]] May 17 14:29:43.660: INFO: successfully validated that service multi-endpoint-test in namespace services-9922 exposes endpoints map[pod1:[100] pod2:[101]] (4.114660677s elapsed) STEP: Deleting pod pod1 in namespace services-9922 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9922 to expose endpoints map[pod2:[101]] May 17 14:29:44.684: INFO: successfully validated that service multi-endpoint-test in namespace services-9922 exposes endpoints map[pod2:[101]] (1.020835341s elapsed) STEP: Deleting pod pod2 in namespace services-9922 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9922 to expose endpoints map[] May 17 14:29:45.755: INFO: successfully validated that service multi-endpoint-test in namespace services-9922 exposes endpoints map[] (1.066469824s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:29:45.852: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9922" for this suite. 
May 17 14:30:07.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:30:08.001: INFO: namespace services-9922 deletion completed in 22.131531997s [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92 • [SLOW TEST:32.689 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:30:08.001: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 17 14:30:12.134: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 17 14:30:17.240: INFO: no pod exists with the name we were looking for, assuming the 
termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:30:17.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6562" for this suite. May 17 14:30:23.266: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:30:23.338: INFO: namespace pods-6562 deletion completed in 6.090394952s • [SLOW TEST:15.337 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:30:23.338: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the 
garbage collector mistakenly deletes the rs STEP: Gathering metrics W0517 14:30:53.983604 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 17 14:30:53.983: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:30:53.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7864" for this suite. 
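The delete request exercised above can be sketched as follows. This is an illustrative `DeleteOptions` body (not the framework's own code) showing the `propagationPolicy: Orphan` setting named in the test title, which makes the garbage collector dissociate the deployment's ReplicaSet instead of cascading the delete to it.

```python
# Illustrative sketch: the body of a DELETE call on a Deployment with
# propagationPolicy=Orphan. With Orphan, the GC removes the owner reference
# from dependents (the ReplicaSet) and leaves them running, which is why the
# test above waits 30 seconds to confirm the RS survives the delete.
delete_options = {
    "apiVersion": "v1",
    "kind": "DeleteOptions",
    "propagationPolicy": "Orphan",  # alternatives: "Background", "Foreground"
}

def orphans_dependents(opts):
    # True when the delete dissociates dependents rather than cascading.
    return opts.get("propagationPolicy") == "Orphan"
```

With `Background` or `Foreground` the same delete would instead cascade to the ReplicaSet and its pods.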
May 17 14:31:00.001: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:31:00.071: INFO: namespace gc-7864 deletion completed in 6.083625641s • [SLOW TEST:36.732 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:31:00.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. 
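The pods these lifecycle-hook tests create can be sketched as follows. The image and handler details are illustrative assumptions; the hook kinds mirror the log (a postStart httpGet hook in the earlier test, a preStop exec hook here), and the long "still exists" polling above reflects that pod deletion does not complete until the preStop hook finishes.

```python
# Illustrative pod sketch for the preStop exec hook test. The image and the
# hook command (calling the separately created HTTPGet handler container)
# are assumptions; the lifecycle structure matches the Kubernetes pod API.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-with-prestop-exec-hook"},
    "spec": {
        "containers": [{
            "name": "pod-with-prestop-exec-hook",
            "image": "busybox",  # assumption; the e2e suite uses its own test image
            "lifecycle": {
                # preStop: the kubelet runs this command inside the container
                # just before termination; the earlier test instead used
                # {"postStart": {"httpGet": {...}}} fired right after start.
                "preStop": {
                    "exec": {"command": ["sh", "-c",
                             "wget -qO- http://handler:8080/echo?msg=prestop"]}
                },
            },
        }],
    },
}
```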
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 17 14:31:08.624: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:08.631: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:10.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:10.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:12.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:12.634: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:14.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:14.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:16.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:16.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:18.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:18.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:20.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:20.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:22.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:22.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:24.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:24.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:26.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:26.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:28.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:28.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:30.631: INFO: Waiting for pod 
pod-with-prestop-exec-hook to disappear May 17 14:31:30.635: INFO: Pod pod-with-prestop-exec-hook still exists May 17 14:31:32.631: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 17 14:31:32.635: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:31:32.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6098" for this suite. May 17 14:31:54.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:31:54.774: INFO: namespace container-lifecycle-hook-6098 deletion completed in 22.126954349s • [SLOW TEST:54.703 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:31:54.774: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test hostPath mode May 17 14:31:54.852: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2047" to be "success or failure" May 17 14:31:54.873: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 20.753322ms May 17 14:31:56.876: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024095694s May 17 14:31:58.895: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042594066s May 17 14:32:00.900: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.047316329s STEP: Saw pod success May 17 14:32:00.900: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 17 14:32:00.903: INFO: Trying to get logs from node iruya-worker pod pod-host-path-test container test-container-1: STEP: delete the pod May 17 14:32:00.939: INFO: Waiting for pod pod-host-path-test to disappear May 17 14:32:00.955: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:32:00.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-2047" for this suite. 
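The hostPath test above mounts a directory from the node's filesystem into a pod and checks the file mode seen at the mount point. A minimal sketch of that pod shape, with the host path, image, and command as illustrative assumptions:

```python
# Illustrative sketch of a hostPath-mode test pod. The path, image, and
# command are assumptions; the volume/volumeMount wiring matches the pod API.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "pod-host-path-test"},
    "spec": {
        "volumes": [{
            "name": "test-volume",
            # hostPath mounts a node directory into the pod.
            "hostPath": {"path": "/tmp/test-volume", "type": "DirectoryOrCreate"},
        }],
        "containers": [{
            "name": "test-container-1",
            "image": "busybox",  # assumption
            # Print the mount point's mode bits so the test can assert on them.
            "command": ["sh", "-c", "stat -c %a /test-volume"],
            "volumeMounts": [{"name": "test-volume", "mountPath": "/test-volume"}],
        }],
        "restartPolicy": "Never",
    },
}
```

The test then pulls the container's log from the node (here `iruya-worker`) and compares the printed mode against the expected bits.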
May 17 14:32:06.971: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:32:07.056: INFO: namespace hostpath-2047 deletion completed in 6.097546598s • [SLOW TEST:12.282 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:32:07.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:32:07.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8919" for this suite. 
May 17 14:32:13.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:32:13.330: INFO: namespace kubelet-test-8919 deletion completed in 6.081561652s • [SLOW TEST:6.273 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:32:13.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 May 17 14:32:13.418: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 17 14:32:18.422: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 17 14:32:18.422: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 17 14:32:20.426: INFO: Creating deployment "test-rollover-deployment" May 17 14:32:20.436: INFO: Make sure deployment "test-rollover-deployment" performs 
scaling operations May 17 14:32:22.442: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 17 14:32:22.454: INFO: Ensure that both replica sets have 1 created replica May 17 14:32:22.459: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 17 14:32:22.465: INFO: Updating deployment test-rollover-deployment May 17 14:32:22.465: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 17 14:32:24.627: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 17 14:32:24.640: INFO: Make sure deployment "test-rollover-deployment" is complete May 17 14:32:24.784: INFO: all replica sets need to contain the pod-template-hash label May 17 14:32:24.784: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322742, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 14:32:26.824: INFO: all replica sets need to contain the pod-template-hash label May 17 14:32:26.824: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322746, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 14:32:28.792: INFO: all replica sets need to contain the pod-template-hash label May 17 14:32:28.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322746, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 14:32:30.791: INFO: all replica sets need to contain the pod-template-hash label May 17 14:32:30.791: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322746, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 14:32:32.792: INFO: all replica sets need to contain the pod-template-hash label May 17 14:32:32.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322746, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 14:32:34.790: INFO: all 
replica sets need to contain the pod-template-hash label May 17 14:32:34.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322746, loc:(*time.Location)(0x7ead8c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63725322740, loc:(*time.Location)(0x7ead8c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} May 17 14:32:36.818: INFO: May 17 14:32:36.818: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 May 17 14:32:36.929: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-7752,SelfLink:/apis/apps/v1/namespaces/deployment-7752/deployments/test-rollover-deployment,UID:20f1049f-70c0-482e-996e-f3b08aa3e079,ResourceVersion:11412134,Generation:2,CreationTimestamp:2020-05-17 14:32:20 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 
2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-05-17 14:32:20 +0000 UTC 2020-05-17 14:32:20 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-05-17 14:32:36 +0000 UTC 2020-05-17 14:32:20 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} May 17 14:32:36.932: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-7752,SelfLink:/apis/apps/v1/namespaces/deployment-7752/replicasets/test-rollover-deployment-854595fc44,UID:53bdec3d-5e10-4f0c-b974-e34ce12b1d11,ResourceVersion:11412123,Generation:2,CreationTimestamp:2020-05-17 14:32:22 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 20f1049f-70c0-482e-996e-f3b08aa3e079 0xc003190a57 0xc003190a58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} May 17 14:32:36.932: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 17 14:32:36.932: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-7752,SelfLink:/apis/apps/v1/namespaces/deployment-7752/replicasets/test-rollover-controller,UID:2945f598-a12c-42d4-b964-d4cb05dd8328,ResourceVersion:11412132,Generation:2,CreationTimestamp:2020-05-17 14:32:13 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 20f1049f-70c0-482e-996e-f3b08aa3e079 0xc00319096f 0xc003190980}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: 
nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 17 14:32:36.932: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-7752,SelfLink:/apis/apps/v1/namespaces/deployment-7752/replicasets/test-rollover-deployment-9b8b997cf,UID:598d37ba-fc72-49dd-8c6e-ae94423cbb3e,ResourceVersion:11412088,Generation:2,CreationTimestamp:2020-05-17 14:32:20 +0000 
UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 20f1049f-70c0-482e-996e-f3b08aa3e079 0xc003190b20 0xc003190b21}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} May 17 14:32:36.936: INFO: Pod "test-rollover-deployment-854595fc44-kkbg9" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-kkbg9,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-7752,SelfLink:/api/v1/namespaces/deployment-7752/pods/test-rollover-deployment-854595fc44-kkbg9,UID:1642ffa9-d92c-4de5-a384-22fce9eb0cac,ResourceVersion:11412099,Generation:0,CreationTimestamp:2020-05-17 14:32:22 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 53bdec3d-5e10-4f0c-b974-e34ce12b1d11 0xc003191707 0xc003191708}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-t5gm6 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-t5gm6,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil 
nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-t5gm6 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc003191780} {node.kubernetes.io/unreachable Exists NoExecute 0xc0031917a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:32:22 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:32:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:32:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:32:22 +0000 UTC }],Message:,Reason:,HostIP:172.17.0.6,PodIP:10.244.2.50,StartTime:2020-05-17 14:32:22 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-05-17 14:32:25 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 
gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 containerd://eeb5ceeb001c5a9b30bd8107152fff72523716016d0ec2b824e7a16743f46fd5}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:32:36.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7752" for this suite. May 17 14:32:44.958: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:32:45.015: INFO: namespace deployment-7752 deletion completed in 8.076450801s • [SLOW TEST:31.685 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:32:45.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-configmap-jpmr STEP: Creating a pod to test atomic-volume-subpath May 17 14:32:45.218: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-jpmr" in namespace "subpath-8572" to be "success or failure" May 17 14:32:45.220: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Pending", Reason="", readiness=false. Elapsed: 1.946906ms May 17 14:32:47.224: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.005733479s May 17 14:32:49.237: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 4.018852307s May 17 14:32:51.245: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 6.026658109s May 17 14:32:53.249: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 8.030951977s May 17 14:32:55.253: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 10.034927575s May 17 14:32:57.257: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 12.038880283s May 17 14:32:59.260: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 14.041850091s May 17 14:33:01.265: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 16.046798203s May 17 14:33:03.269: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 18.050609021s May 17 14:33:05.310: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. Elapsed: 20.091681578s May 17 14:33:07.314: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Running", Reason="", readiness=true. 
Elapsed: 22.095662885s May 17 14:33:09.317: INFO: Pod "pod-subpath-test-configmap-jpmr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.098914154s STEP: Saw pod success May 17 14:33:09.317: INFO: Pod "pod-subpath-test-configmap-jpmr" satisfied condition "success or failure" May 17 14:33:09.320: INFO: Trying to get logs from node iruya-worker2 pod pod-subpath-test-configmap-jpmr container test-container-subpath-configmap-jpmr: STEP: delete the pod May 17 14:33:09.448: INFO: Waiting for pod pod-subpath-test-configmap-jpmr to disappear May 17 14:33:09.473: INFO: Pod pod-subpath-test-configmap-jpmr no longer exists STEP: Deleting pod pod-subpath-test-configmap-jpmr May 17 14:33:09.473: INFO: Deleting pod "pod-subpath-test-configmap-jpmr" in namespace "subpath-8572" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:33:09.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-8572" for this suite. 
May 17 14:33:15.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:33:15.584: INFO: namespace subpath-8572 deletion completed in 6.105003854s • [SLOW TEST:30.569 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:33:15.584: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 May 17 14:33:15.627: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 17 14:33:15.642: INFO: Waiting for terminating namespaces to be deleted... 
May 17 14:33:15.645: INFO: Logging pods the kubelet thinks is on node iruya-worker before test May 17 14:33:15.664: INFO: kube-proxy-pmz4p from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 17 14:33:15.664: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:33:15.664: INFO: kindnet-gwz5g from kube-system started at 2020-03-15 18:24:55 +0000 UTC (1 container statuses recorded) May 17 14:33:15.664: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:33:15.664: INFO: Logging pods the kubelet thinks is on node iruya-worker2 before test May 17 14:33:15.670: INFO: kube-proxy-vwbcj from kube-system started at 2020-03-15 18:24:42 +0000 UTC (1 container statuses recorded) May 17 14:33:15.670: INFO: Container kube-proxy ready: true, restart count 0 May 17 14:33:15.670: INFO: kindnet-mgd8b from kube-system started at 2020-03-15 18:24:43 +0000 UTC (1 container statuses recorded) May 17 14:33:15.670: INFO: Container kindnet-cni ready: true, restart count 0 May 17 14:33:15.670: INFO: coredns-5d4dd4b4db-gm7vr from kube-system started at 2020-03-15 18:24:52 +0000 UTC (1 container statuses recorded) May 17 14:33:15.670: INFO: Container coredns ready: true, restart count 0 May 17 14:33:15.670: INFO: coredns-5d4dd4b4db-6jcgz from kube-system started at 2020-03-15 18:24:54 +0000 UTC (1 container statuses recorded) May 17 14:33:15.670: INFO: Container coredns ready: true, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: verifying the node has the label node iruya-worker STEP: verifying the node has the label node iruya-worker2 May 17 14:33:15.748: INFO: Pod coredns-5d4dd4b4db-6jcgz requesting resource cpu=100m on Node iruya-worker2 May 17 14:33:15.749: INFO: Pod coredns-5d4dd4b4db-gm7vr requesting resource cpu=100m on Node iruya-worker2 May 17 14:33:15.749: INFO: Pod 
kindnet-gwz5g requesting resource cpu=100m on Node iruya-worker May 17 14:33:15.749: INFO: Pod kindnet-mgd8b requesting resource cpu=100m on Node iruya-worker2 May 17 14:33:15.749: INFO: Pod kube-proxy-pmz4p requesting resource cpu=0m on Node iruya-worker May 17 14:33:15.749: INFO: Pod kube-proxy-vwbcj requesting resource cpu=0m on Node iruya-worker2 STEP: Starting Pods to consume most of the cluster CPU. STEP: Creating another pod that requires unavailable amount of CPU. STEP: Considering event: Type = [Normal], Name = [filler-pod-5706f0f9-6b85-4d56-9884-4c8e7895cf3f.160fd75ed69c6694], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1504/filler-pod-5706f0f9-6b85-4d56-9884-4c8e7895cf3f to iruya-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-5706f0f9-6b85-4d56-9884-4c8e7895cf3f.160fd75f23f3515a], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-5706f0f9-6b85-4d56-9884-4c8e7895cf3f.160fd75f7f784eca], Reason = [Created], Message = [Created container filler-pod-5706f0f9-6b85-4d56-9884-4c8e7895cf3f] STEP: Considering event: Type = [Normal], Name = [filler-pod-5706f0f9-6b85-4d56-9884-4c8e7895cf3f.160fd75f99291838], Reason = [Started], Message = [Started container filler-pod-5706f0f9-6b85-4d56-9884-4c8e7895cf3f] STEP: Considering event: Type = [Normal], Name = [filler-pod-a08b6d2e-06d3-4fec-add8-e725ecfe769e.160fd75ed64755b2], Reason = [Scheduled], Message = [Successfully assigned sched-pred-1504/filler-pod-a08b6d2e-06d3-4fec-add8-e725ecfe769e to iruya-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-a08b6d2e-06d3-4fec-add8-e725ecfe769e.160fd75f5dbd1660], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-a08b6d2e-06d3-4fec-add8-e725ecfe769e.160fd75f9dd49060], Reason = [Created], Message = [Created 
container filler-pod-a08b6d2e-06d3-4fec-add8-e725ecfe769e] STEP: Considering event: Type = [Normal], Name = [filler-pod-a08b6d2e-06d3-4fec-add8-e725ecfe769e.160fd75fac2387bc], Reason = [Started], Message = [Started container filler-pod-a08b6d2e-06d3-4fec-add8-e725ecfe769e] STEP: Considering event: Type = [Warning], Name = [additional-pod.160fd7603d2b1e35], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node iruya-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node iruya-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:33:22.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1504" for this suite. 
May 17 14:33:29.193: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:33:29.369: INFO: namespace sched-pred-1504 deletion completed in 6.439041123s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:13.785 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:33:29.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Performing setup for networking test in namespace pod-network-test-1893 STEP: creating a selector STEP: Creating the service pods in kubernetes May 17 14:33:29.427: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 17 14:33:51.560: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 
'http://10.244.2.53:8080/dial?request=hostName&protocol=http&host=10.244.1.199&port=8080&tries=1'] Namespace:pod-network-test-1893 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:33:51.560: INFO: >>> kubeConfig: /root/.kube/config I0517 14:33:51.591332 6 log.go:172] (0xc000fa4420) (0xc001c94460) Create stream I0517 14:33:51.591361 6 log.go:172] (0xc000fa4420) (0xc001c94460) Stream added, broadcasting: 1 I0517 14:33:51.593762 6 log.go:172] (0xc000fa4420) Reply frame received for 1 I0517 14:33:51.593797 6 log.go:172] (0xc000fa4420) (0xc002d16820) Create stream I0517 14:33:51.593810 6 log.go:172] (0xc000fa4420) (0xc002d16820) Stream added, broadcasting: 3 I0517 14:33:51.595144 6 log.go:172] (0xc000fa4420) Reply frame received for 3 I0517 14:33:51.595186 6 log.go:172] (0xc000fa4420) (0xc002d168c0) Create stream I0517 14:33:51.595199 6 log.go:172] (0xc000fa4420) (0xc002d168c0) Stream added, broadcasting: 5 I0517 14:33:51.596389 6 log.go:172] (0xc000fa4420) Reply frame received for 5 I0517 14:33:51.665535 6 log.go:172] (0xc000fa4420) Data frame received for 3 I0517 14:33:51.665572 6 log.go:172] (0xc002d16820) (3) Data frame handling I0517 14:33:51.665605 6 log.go:172] (0xc002d16820) (3) Data frame sent I0517 14:33:51.665967 6 log.go:172] (0xc000fa4420) Data frame received for 5 I0517 14:33:51.665993 6 log.go:172] (0xc002d168c0) (5) Data frame handling I0517 14:33:51.666019 6 log.go:172] (0xc000fa4420) Data frame received for 3 I0517 14:33:51.666034 6 log.go:172] (0xc002d16820) (3) Data frame handling I0517 14:33:51.667454 6 log.go:172] (0xc000fa4420) Data frame received for 1 I0517 14:33:51.667474 6 log.go:172] (0xc001c94460) (1) Data frame handling I0517 14:33:51.667497 6 log.go:172] (0xc001c94460) (1) Data frame sent I0517 14:33:51.667520 6 log.go:172] (0xc000fa4420) (0xc001c94460) Stream removed, broadcasting: 1 I0517 14:33:51.667534 6 log.go:172] (0xc000fa4420) Go away received 
I0517 14:33:51.667710 6 log.go:172] (0xc000fa4420) (0xc001c94460) Stream removed, broadcasting: 1 I0517 14:33:51.667744 6 log.go:172] (0xc000fa4420) (0xc002d16820) Stream removed, broadcasting: 3 I0517 14:33:51.667759 6 log.go:172] (0xc000fa4420) (0xc002d168c0) Stream removed, broadcasting: 5 May 17 14:33:51.667: INFO: Waiting for endpoints: map[] May 17 14:33:51.671: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.53:8080/dial?request=hostName&protocol=http&host=10.244.2.52&port=8080&tries=1'] Namespace:pod-network-test-1893 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 17 14:33:51.671: INFO: >>> kubeConfig: /root/.kube/config I0517 14:33:51.706788 6 log.go:172] (0xc000dd11e0) (0xc002d16d20) Create stream I0517 14:33:51.706893 6 log.go:172] (0xc000dd11e0) (0xc002d16d20) Stream added, broadcasting: 1 I0517 14:33:51.711328 6 log.go:172] (0xc000dd11e0) Reply frame received for 1 I0517 14:33:51.711365 6 log.go:172] (0xc000dd11e0) (0xc000447720) Create stream I0517 14:33:51.711381 6 log.go:172] (0xc000dd11e0) (0xc000447720) Stream added, broadcasting: 3 I0517 14:33:51.713584 6 log.go:172] (0xc000dd11e0) Reply frame received for 3 I0517 14:33:51.713627 6 log.go:172] (0xc000dd11e0) (0xc0004477c0) Create stream I0517 14:33:51.713637 6 log.go:172] (0xc000dd11e0) (0xc0004477c0) Stream added, broadcasting: 5 I0517 14:33:51.715157 6 log.go:172] (0xc000dd11e0) Reply frame received for 5 I0517 14:33:51.787945 6 log.go:172] (0xc000dd11e0) Data frame received for 3 I0517 14:33:51.787983 6 log.go:172] (0xc000447720) (3) Data frame handling I0517 14:33:51.788005 6 log.go:172] (0xc000447720) (3) Data frame sent I0517 14:33:51.788747 6 log.go:172] (0xc000dd11e0) Data frame received for 5 I0517 14:33:51.788790 6 log.go:172] (0xc0004477c0) (5) Data frame handling I0517 14:33:51.788805 6 log.go:172] (0xc000dd11e0) Data frame received for 3 I0517 14:33:51.788833 6 log.go:172] 
(0xc000447720) (3) Data frame handling I0517 14:33:51.790772 6 log.go:172] (0xc000dd11e0) Data frame received for 1 I0517 14:33:51.790809 6 log.go:172] (0xc002d16d20) (1) Data frame handling I0517 14:33:51.790837 6 log.go:172] (0xc002d16d20) (1) Data frame sent I0517 14:33:51.790858 6 log.go:172] (0xc000dd11e0) (0xc002d16d20) Stream removed, broadcasting: 1 I0517 14:33:51.790877 6 log.go:172] (0xc000dd11e0) Go away received I0517 14:33:51.790971 6 log.go:172] (0xc000dd11e0) (0xc002d16d20) Stream removed, broadcasting: 1 I0517 14:33:51.790990 6 log.go:172] (0xc000dd11e0) (0xc000447720) Stream removed, broadcasting: 3 I0517 14:33:51.790998 6 log.go:172] (0xc000dd11e0) (0xc0004477c0) Stream removed, broadcasting: 5 May 17 14:33:51.791: INFO: Waiting for endpoints: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:33:51.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1893" for this suite. 
May 17 14:34:15.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:34:15.884: INFO: namespace pod-network-test-1893 deletion completed in 24.089152885s • [SLOW TEST:46.515 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:34:15.885: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin May 17 14:34:15.967: INFO: Waiting up to 5m0s for pod "downwardapi-volume-799f7565-2932-4142-a402-e01fb8a281dc" in namespace "projected-8008" to be "success or failure" May 17 14:34:15.971: INFO: Pod "downwardapi-volume-799f7565-2932-4142-a402-e01fb8a281dc": Phase="Pending", 
Reason="", readiness=false. Elapsed: 3.990603ms May 17 14:34:17.976: INFO: Pod "downwardapi-volume-799f7565-2932-4142-a402-e01fb8a281dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008188458s May 17 14:34:19.980: INFO: Pod "downwardapi-volume-799f7565-2932-4142-a402-e01fb8a281dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012630796s STEP: Saw pod success May 17 14:34:19.980: INFO: Pod "downwardapi-volume-799f7565-2932-4142-a402-e01fb8a281dc" satisfied condition "success or failure" May 17 14:34:19.984: INFO: Trying to get logs from node iruya-worker pod downwardapi-volume-799f7565-2932-4142-a402-e01fb8a281dc container client-container: STEP: delete the pod May 17 14:34:20.038: INFO: Waiting for pod downwardapi-volume-799f7565-2932-4142-a402-e01fb8a281dc to disappear May 17 14:34:20.055: INFO: Pod downwardapi-volume-799f7565-2932-4142-a402-e01fb8a281dc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:34:20.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8008" for this suite. 
May 17 14:34:26.079: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:34:26.171: INFO: namespace projected-8008 deletion completed in 6.112657235s

• [SLOW TEST:10.287 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:34:26.171: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
May 17 14:34:26.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6876'
May 17 14:34:28.817: INFO: stderr: ""
May 17 14:34:28.817: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod was created
[AfterEach] [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690
May 17 14:34:28.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6876'
May 17 14:34:42.241: INFO: stderr: ""
May 17 14:34:42.241: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:34:42.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6876" for this suite.
May 17 14:34:48.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:34:48.372: INFO: namespace kubectl-6876 deletion completed in 6.127852462s

• [SLOW TEST:22.200 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
[sig-storage] Projected configMap
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:34:48.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-905e4960-e32c-4589-904c-6a7a6f609ce9
STEP: Creating a pod to test consume configMaps
May 17 14:34:48.487: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb" in namespace "projected-3529" to be "success or failure"
May 17 14:34:48.503: INFO: Pod "pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb": Phase="Pending", Reason="", readiness=false. Elapsed: 16.460013ms
May 17 14:34:50.526: INFO: Pod "pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039413679s
May 17 14:34:52.530: INFO: Pod "pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb": Phase="Running", Reason="", readiness=true. Elapsed: 4.04372914s
May 17 14:34:54.535: INFO: Pod "pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048214816s
STEP: Saw pod success
May 17 14:34:54.535: INFO: Pod "pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb" satisfied condition "success or failure"
May 17 14:34:54.538: INFO: Trying to get logs from node iruya-worker pod pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb container projected-configmap-volume-test: 
STEP: delete the pod
May 17 14:34:54.624: INFO: Waiting for pod pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb to disappear
May 17 14:34:54.646: INFO: Pod pod-projected-configmaps-cd668593-27f4-47b6-885d-6a51dd951beb no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:34:54.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3529" for this suite.
May 17 14:35:00.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:35:00.733: INFO: namespace projected-3529 deletion completed in 6.08354134s

• [SLOW TEST:12.361 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:35:00.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7984b205-9ba2-4205-8785-d3665a52cf82
STEP: Creating a pod to test consume secrets
May 17 14:35:00.802: INFO: Waiting up to 5m0s for pod "pod-secrets-b4ed3606-8d46-4a77-8c2e-382387e6cc9e" in namespace "secrets-4137" to be "success or failure"
May 17 14:35:00.817: INFO: Pod "pod-secrets-b4ed3606-8d46-4a77-8c2e-382387e6cc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.652952ms
May 17 14:35:02.821: INFO: Pod "pod-secrets-b4ed3606-8d46-4a77-8c2e-382387e6cc9e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018994074s
May 17 14:35:04.825: INFO: Pod "pod-secrets-b4ed3606-8d46-4a77-8c2e-382387e6cc9e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.022563448s
STEP: Saw pod success
May 17 14:35:04.825: INFO: Pod "pod-secrets-b4ed3606-8d46-4a77-8c2e-382387e6cc9e" satisfied condition "success or failure"
May 17 14:35:04.828: INFO: Trying to get logs from node iruya-worker2 pod pod-secrets-b4ed3606-8d46-4a77-8c2e-382387e6cc9e container secret-env-test: 
STEP: delete the pod
May 17 14:35:04.842: INFO: Waiting for pod pod-secrets-b4ed3606-8d46-4a77-8c2e-382387e6cc9e to disappear
May 17 14:35:04.879: INFO: Pod pod-secrets-b4ed3606-8d46-4a77-8c2e-382387e6cc9e no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:35:04.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4137" for this suite.
May 17 14:35:10.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:35:10.955: INFO: namespace secrets-4137 deletion completed in 6.072500772s

• [SLOW TEST:10.222 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSS
------------------------------
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:35:10.955: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-131d950a-9507-4469-9422-77c0cd5e75ae
STEP: Creating a pod to test consume configMaps
May 17 14:35:11.055: INFO: Waiting up to 5m0s for pod "pod-configmaps-36933576-26c0-4d7c-9d63-54ea29fe6f16" in namespace "configmap-3273" to be "success or failure"
May 17 14:35:11.069: INFO: Pod "pod-configmaps-36933576-26c0-4d7c-9d63-54ea29fe6f16": Phase="Pending", Reason="", readiness=false. Elapsed: 13.068701ms
May 17 14:35:13.072: INFO: Pod "pod-configmaps-36933576-26c0-4d7c-9d63-54ea29fe6f16": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017001477s
May 17 14:35:15.077: INFO: Pod "pod-configmaps-36933576-26c0-4d7c-9d63-54ea29fe6f16": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02148483s
STEP: Saw pod success
May 17 14:35:15.077: INFO: Pod "pod-configmaps-36933576-26c0-4d7c-9d63-54ea29fe6f16" satisfied condition "success or failure"
May 17 14:35:15.080: INFO: Trying to get logs from node iruya-worker pod pod-configmaps-36933576-26c0-4d7c-9d63-54ea29fe6f16 container configmap-volume-test: 
STEP: delete the pod
May 17 14:35:15.112: INFO: Waiting for pod pod-configmaps-36933576-26c0-4d7c-9d63-54ea29fe6f16 to disappear
May 17 14:35:15.173: INFO: Pod pod-configmaps-36933576-26c0-4d7c-9d63-54ea29fe6f16 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:35:15.173: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3273" for this suite.
May 17 14:35:21.222: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:35:21.300: INFO: namespace configmap-3273 deletion completed in 6.122201345s

• [SLOW TEST:10.344 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:35:21.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
May 17 14:35:21.384: INFO: Waiting up to 5m0s for pod "pod-9a0c876b-1a7d-4900-8783-46c3ab1b39e8" in namespace "emptydir-2635" to be "success or failure"
May 17 14:35:21.389: INFO: Pod "pod-9a0c876b-1a7d-4900-8783-46c3ab1b39e8": Phase="Pending", Reason="", readiness=false. Elapsed: 5.174939ms
May 17 14:35:23.400: INFO: Pod "pod-9a0c876b-1a7d-4900-8783-46c3ab1b39e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015986086s
May 17 14:35:25.405: INFO: Pod "pod-9a0c876b-1a7d-4900-8783-46c3ab1b39e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020858322s
STEP: Saw pod success
May 17 14:35:25.405: INFO: Pod "pod-9a0c876b-1a7d-4900-8783-46c3ab1b39e8" satisfied condition "success or failure"
May 17 14:35:25.408: INFO: Trying to get logs from node iruya-worker pod pod-9a0c876b-1a7d-4900-8783-46c3ab1b39e8 container test-container: 
STEP: delete the pod
May 17 14:35:25.473: INFO: Waiting for pod pod-9a0c876b-1a7d-4900-8783-46c3ab1b39e8 to disappear
May 17 14:35:25.526: INFO: Pod pod-9a0c876b-1a7d-4900-8783-46c3ab1b39e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
May 17 14:35:25.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2635" for this suite.
May 17 14:35:31.542: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
May 17 14:35:31.624: INFO: namespace emptydir-2635 deletion completed in 6.094072338s

• [SLOW TEST:10.323 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
May 17 14:35:31.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4724
[It] Burst scaling should run to completion even with unhealthy pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating stateful set ss in namespace statefulset-4724
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4724
May 17 14:35:31.711: INFO: Found 0 stateful pods, waiting for 1
May 17 14:35:41.715: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
May 17 14:35:41.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4724 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 17 14:35:41.992: INFO: stderr: "I0517 14:35:41.856428 2957 log.go:172] (0xc0009342c0) (0xc0008fa5a0) Create stream\nI0517 14:35:41.856498 2957 log.go:172] (0xc0009342c0) (0xc0008fa5a0) Stream added, broadcasting: 1\nI0517 14:35:41.859139 2957 log.go:172] (0xc0009342c0) Reply frame received for 1\nI0517 14:35:41.859198 2957 log.go:172] (0xc0009342c0) (0xc0008fa640) Create stream\nI0517 14:35:41.859211 2957 log.go:172] (0xc0009342c0) (0xc0008fa640) Stream added, broadcasting: 3\nI0517 14:35:41.860392 2957 log.go:172] (0xc0009342c0) Reply frame received for 3\nI0517 14:35:41.860427 2957 log.go:172] (0xc0009342c0) (0xc0008fa6e0) Create stream\nI0517 14:35:41.860438 2957 log.go:172] (0xc0009342c0) (0xc0008fa6e0) Stream added, broadcasting: 5\nI0517 14:35:41.861630 2957 log.go:172] (0xc0009342c0) Reply frame received for 5\nI0517 14:35:41.950375 2957 log.go:172] (0xc0009342c0) Data frame received for 5\nI0517 14:35:41.950413 2957 log.go:172] (0xc0008fa6e0) (5) Data frame handling\nI0517 14:35:41.950453 2957 log.go:172] (0xc0008fa6e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 14:35:41.984105 2957 log.go:172] (0xc0009342c0) Data frame received for 5\nI0517 14:35:41.984161 2957 log.go:172] (0xc0008fa6e0) (5) Data frame handling\nI0517 14:35:41.984225 2957 log.go:172] (0xc0009342c0) Data frame received for 3\nI0517 14:35:41.984273 2957 log.go:172] (0xc0008fa640) (3) Data frame handling\nI0517 14:35:41.984305 2957 log.go:172] (0xc0008fa640) (3) Data frame sent\nI0517 14:35:41.984325 2957 log.go:172] (0xc0009342c0) Data frame received for 3\nI0517 14:35:41.984334 2957 log.go:172] (0xc0008fa640) (3) Data frame handling\nI0517 14:35:41.986746 2957 log.go:172] (0xc0009342c0) Data frame received for 1\nI0517 14:35:41.986761 2957 log.go:172] (0xc0008fa5a0) (1) Data frame handling\nI0517 14:35:41.986768 2957 log.go:172] (0xc0008fa5a0) (1) Data frame sent\nI0517 14:35:41.986777 2957 log.go:172] (0xc0009342c0) (0xc0008fa5a0) Stream removed, broadcasting: 1\nI0517 14:35:41.986877 2957 log.go:172] (0xc0009342c0) Go away received\nI0517 14:35:41.987052 2957 log.go:172] (0xc0009342c0) (0xc0008fa5a0) Stream removed, broadcasting: 1\nI0517 14:35:41.987063 2957 log.go:172] (0xc0009342c0) (0xc0008fa640) Stream removed, broadcasting: 3\nI0517 14:35:41.987068 2957 log.go:172] (0xc0009342c0) (0xc0008fa6e0) Stream removed, broadcasting: 5\n"
May 17 14:35:41.992: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 17 14:35:41.992: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 17 14:35:41.996: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
May 17 14:35:52.010: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 17 14:35:52.010: INFO: Waiting for statefulset status.replicas updated to 0
May 17 14:35:52.030: INFO: POD NODE PHASE GRACE CONDITIONS
May 17 14:35:52.030: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:42 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }]
May 17 14:35:52.030: INFO: 
May 17 14:35:52.030: INFO: StatefulSet ss has not reached scale 3, at 1
May 17 14:35:53.054: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.997170219s
May 17 14:35:54.198: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.972884579s
May 17 14:35:55.203: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.828767824s
May 17 14:35:56.228: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.824344254s
May 17 14:35:57.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.798704738s
May 17 14:35:58.258: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.795203176s
May 17 14:35:59.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.768703825s
May 17 14:36:00.283: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.76328261s
May 17 14:36:01.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 744.375533ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4724
May 17 14:36:02.294: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4724 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 17 14:36:02.542: INFO: stderr: "I0517 14:36:02.448625 2977 log.go:172] (0xc0009d4630) (0xc000694a00) Create stream\nI0517 14:36:02.448685 2977 log.go:172] (0xc0009d4630) (0xc000694a00) Stream added, broadcasting: 1\nI0517 14:36:02.451254 2977 log.go:172] (0xc0009d4630) Reply frame received for 1\nI0517 14:36:02.451306 2977 log.go:172] (0xc0009d4630) (0xc000694aa0) Create stream\nI0517 14:36:02.451334 2977 log.go:172] (0xc0009d4630) (0xc000694aa0) Stream added, broadcasting: 3\nI0517 14:36:02.452966 2977 log.go:172] (0xc0009d4630) Reply frame received for 3\nI0517 14:36:02.453001 2977 log.go:172] (0xc0009d4630) (0xc000694280) Create stream\nI0517 14:36:02.453010 2977 log.go:172] (0xc0009d4630) (0xc000694280) Stream added, broadcasting: 5\nI0517 14:36:02.454064 2977 log.go:172] (0xc0009d4630) Reply frame received for 5\nI0517 14:36:02.534024 2977 log.go:172] (0xc0009d4630) Data frame received for 5\nI0517 14:36:02.534066 2977 log.go:172] (0xc000694280) (5) Data frame handling\nI0517 14:36:02.534079 2977 log.go:172] (0xc000694280) (5) Data frame sent\nI0517 14:36:02.534089 2977 log.go:172] (0xc0009d4630) Data frame received for 5\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0517 14:36:02.534099 2977 log.go:172] (0xc000694280) (5) Data frame handling\nI0517 14:36:02.534154 2977 log.go:172] (0xc0009d4630) Data frame received for 3\nI0517 14:36:02.534186 2977 log.go:172] (0xc000694aa0) (3) Data frame handling\nI0517 14:36:02.534205 2977 log.go:172] (0xc000694aa0) (3) Data frame sent\nI0517 14:36:02.534219 2977 log.go:172] (0xc0009d4630) Data frame received for 3\nI0517 14:36:02.534228 2977 log.go:172] (0xc000694aa0) (3) Data frame handling\nI0517 14:36:02.535375 2977 log.go:172] (0xc0009d4630) Data frame received for 1\nI0517 14:36:02.535419 2977 log.go:172] (0xc000694a00) (1) Data frame handling\nI0517 14:36:02.535452 2977 log.go:172] (0xc000694a00) (1) Data frame sent\nI0517 14:36:02.535481 2977 log.go:172] (0xc0009d4630) (0xc000694a00) Stream removed, broadcasting: 1\nI0517 14:36:02.535499 2977 log.go:172] (0xc0009d4630) Go away received\nI0517 14:36:02.535978 2977 log.go:172] (0xc0009d4630) (0xc000694a00) Stream removed, broadcasting: 1\nI0517 14:36:02.536001 2977 log.go:172] (0xc0009d4630) (0xc000694aa0) Stream removed, broadcasting: 3\nI0517 14:36:02.536012 2977 log.go:172] (0xc0009d4630) (0xc000694280) Stream removed, broadcasting: 5\n"
May 17 14:36:02.542: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 17 14:36:02.542: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 17 14:36:02.542: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4724 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 17 14:36:02.748: INFO: stderr: "I0517 14:36:02.671189 2999 log.go:172] (0xc000a5e370) (0xc0006fc820) Create stream\nI0517 14:36:02.671252 2999 log.go:172] (0xc000a5e370) (0xc0006fc820) Stream added, broadcasting: 1\nI0517 14:36:02.676148 2999 log.go:172] (0xc000a5e370) Reply frame received for 1\nI0517 14:36:02.676214 2999 log.go:172] (0xc000a5e370) (0xc00061e460) Create stream\nI0517 14:36:02.676234 2999 log.go:172] (0xc000a5e370) (0xc00061e460) Stream added, broadcasting: 3\nI0517 14:36:02.677650 2999 log.go:172] (0xc000a5e370) Reply frame received for 3\nI0517 14:36:02.677693 2999 log.go:172] (0xc000a5e370) (0xc0006fc000) Create stream\nI0517 14:36:02.677705 2999 log.go:172] (0xc000a5e370) (0xc0006fc000) Stream added, broadcasting: 5\nI0517 14:36:02.678616 2999 log.go:172] (0xc000a5e370) Reply frame received for 5\nI0517 14:36:02.742273 2999 log.go:172] (0xc000a5e370) Data frame received for 5\nI0517 14:36:02.742326 2999 log.go:172] (0xc0006fc000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0517 14:36:02.742358 2999 log.go:172] (0xc000a5e370) Data frame received for 3\nI0517 14:36:02.742396 2999 log.go:172] (0xc00061e460) (3) Data frame handling\nI0517 14:36:02.742409 2999 log.go:172] (0xc00061e460) (3) Data frame sent\nI0517 14:36:02.742446 2999 log.go:172] (0xc0006fc000) (5) Data frame sent\nI0517 14:36:02.742483 2999 log.go:172] (0xc000a5e370) Data frame received for 5\nI0517 14:36:02.742495 2999 log.go:172] (0xc0006fc000) (5) Data frame handling\nI0517 14:36:02.742516 2999 log.go:172] (0xc000a5e370) Data frame received for 3\nI0517 14:36:02.742528 2999 log.go:172] (0xc00061e460) (3) Data frame handling\nI0517 14:36:02.743874 2999 log.go:172] (0xc000a5e370) Data frame received for 1\nI0517 14:36:02.743893 2999 log.go:172] (0xc0006fc820) (1) Data frame handling\nI0517 14:36:02.743905 2999 log.go:172] (0xc0006fc820) (1) Data frame sent\nI0517 14:36:02.743916 2999 log.go:172] (0xc000a5e370) (0xc0006fc820) Stream removed, broadcasting: 1\nI0517 14:36:02.743939 2999 log.go:172] (0xc000a5e370) Go away received\nI0517 14:36:02.744251 2999 log.go:172] (0xc000a5e370) (0xc0006fc820) Stream removed, broadcasting: 1\nI0517 14:36:02.744267 2999 log.go:172] (0xc000a5e370) (0xc00061e460) Stream removed, broadcasting: 3\nI0517 14:36:02.744275 2999 log.go:172] (0xc000a5e370) (0xc0006fc000) Stream removed, broadcasting: 5\n"
May 17 14:36:02.748: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 17 14:36:02.748: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 17 14:36:02.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4724 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
May 17 14:36:02.955: INFO: stderr: "I0517 14:36:02.872968 3021 log.go:172] (0xc000116fd0) (0xc0005b4a00) Create stream\nI0517 14:36:02.873031 3021 log.go:172] (0xc000116fd0) (0xc0005b4a00) Stream added, broadcasting: 1\nI0517 14:36:02.877002 3021 log.go:172] (0xc000116fd0) Reply frame received for 1\nI0517 14:36:02.877047 3021 log.go:172] (0xc000116fd0) (0xc0005b4140) Create stream\nI0517 14:36:02.877063 3021 log.go:172] (0xc000116fd0) (0xc0005b4140) Stream added, broadcasting: 3\nI0517 14:36:02.878461 3021 log.go:172] (0xc000116fd0) Reply frame received for 3\nI0517 14:36:02.878511 3021 log.go:172] (0xc000116fd0) (0xc000340000) Create stream\nI0517 14:36:02.878527 3021 log.go:172] (0xc000116fd0) (0xc000340000) Stream added, broadcasting: 5\nI0517 14:36:02.879401 3021 log.go:172] (0xc000116fd0) Reply frame received for 5\nI0517 14:36:02.947974 3021 log.go:172] (0xc000116fd0) Data frame received for 5\nI0517 14:36:02.948032 3021 log.go:172] (0xc000340000) (5) Data frame handling\nI0517 14:36:02.948055 3021 log.go:172] (0xc000340000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0517 14:36:02.948083 3021 log.go:172] (0xc000116fd0) Data frame received for 3\nI0517 14:36:02.948100 3021 log.go:172] (0xc0005b4140) (3) Data frame handling\nI0517 14:36:02.948130 3021 log.go:172] (0xc0005b4140) (3) Data frame sent\nI0517 14:36:02.948149 3021 log.go:172] (0xc000116fd0) Data frame received for 3\nI0517 14:36:02.948166 3021 log.go:172] (0xc0005b4140) (3) Data frame handling\nI0517 14:36:02.948199 3021 log.go:172] (0xc000116fd0) Data frame received for 5\nI0517 14:36:02.948218 3021 log.go:172] (0xc000340000) (5) Data frame handling\nI0517 14:36:02.949997 3021 log.go:172] (0xc000116fd0) Data frame received for 1\nI0517 14:36:02.950024 3021 log.go:172] (0xc0005b4a00) (1) Data frame handling\nI0517 14:36:02.950033 3021 log.go:172] (0xc0005b4a00) (1) Data frame sent\nI0517 14:36:02.950048 3021 log.go:172] (0xc000116fd0) (0xc0005b4a00) Stream removed, broadcasting: 1\nI0517 14:36:02.950069 3021 log.go:172] (0xc000116fd0) Go away received\nI0517 14:36:02.950544 3021 log.go:172] (0xc000116fd0) (0xc0005b4a00) Stream removed, broadcasting: 1\nI0517 14:36:02.950572 3021 log.go:172] (0xc000116fd0) (0xc0005b4140) Stream removed, broadcasting: 3\nI0517 14:36:02.950591 3021 log.go:172] (0xc000116fd0) (0xc000340000) Stream removed, broadcasting: 5\n"
May 17 14:36:02.955: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
May 17 14:36:02.955: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'
May 17 14:36:02.959: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false
May 17 14:36:12.964: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
May 17 14:36:12.964: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
May 17 14:36:12.964: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with unhealthy stateful pod
May 17 14:36:12.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4724 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 17 14:36:13.202: INFO: stderr: "I0517 14:36:13.106400 3042 log.go:172] (0xc00094c4d0) (0xc000478780) Create stream\nI0517 14:36:13.106460 3042 log.go:172] (0xc00094c4d0) (0xc000478780) Stream added, broadcasting: 1\nI0517 14:36:13.110673 3042 log.go:172] (0xc00094c4d0) Reply frame received for 1\nI0517 14:36:13.110715 3042 log.go:172] (0xc00094c4d0) (0xc0005bc640) Create stream\nI0517 14:36:13.110730 3042 log.go:172] (0xc00094c4d0) (0xc0005bc640) Stream added, broadcasting: 3\nI0517 14:36:13.111591 3042 log.go:172] (0xc00094c4d0) Reply frame received for 3\nI0517 14:36:13.111620 3042 log.go:172] (0xc00094c4d0) (0xc0004780a0) Create stream\nI0517 14:36:13.111630 3042 log.go:172] (0xc00094c4d0) (0xc0004780a0) Stream added, broadcasting: 5\nI0517 14:36:13.112616 3042 log.go:172] (0xc00094c4d0) Reply frame received for 5\nI0517 14:36:13.194390 3042 log.go:172] (0xc00094c4d0) Data frame received for 5\nI0517 14:36:13.194432 3042 log.go:172] (0xc0004780a0) (5) Data frame handling\nI0517 14:36:13.194451 3042 log.go:172] (0xc0004780a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 14:36:13.194487 3042 log.go:172] (0xc00094c4d0) Data frame received for 3\nI0517 14:36:13.194516 3042 log.go:172] (0xc0005bc640) (3) Data frame handling\nI0517 14:36:13.194542 3042 log.go:172] (0xc0005bc640) (3) Data frame sent\nI0517 14:36:13.194581 3042 log.go:172] (0xc00094c4d0) Data frame received for 3\nI0517 14:36:13.194603 3042 log.go:172] (0xc0005bc640) (3) Data frame handling\nI0517 14:36:13.194631 3042 log.go:172] (0xc00094c4d0) Data frame received for 5\nI0517 14:36:13.194649 3042 log.go:172] (0xc0004780a0) (5) Data frame handling\nI0517 14:36:13.196551 3042 log.go:172] (0xc00094c4d0) Data frame received for 1\nI0517 14:36:13.196607 3042 log.go:172] (0xc000478780) (1) Data frame handling\nI0517 14:36:13.196625 3042 log.go:172] (0xc000478780) (1) Data frame sent\nI0517 14:36:13.196637 3042 log.go:172] (0xc00094c4d0) (0xc000478780) Stream removed, broadcasting: 1\nI0517 14:36:13.196650 3042 log.go:172] (0xc00094c4d0) Go away received\nI0517 14:36:13.197050 3042 log.go:172] (0xc00094c4d0) (0xc000478780) Stream removed, broadcasting: 1\nI0517 14:36:13.197068 3042 log.go:172] (0xc00094c4d0) (0xc0005bc640) Stream removed, broadcasting: 3\nI0517 14:36:13.197073 3042 log.go:172] (0xc00094c4d0) (0xc0004780a0) Stream removed, broadcasting: 5\n"
May 17 14:36:13.202: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 17 14:36:13.202: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 17 14:36:13.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4724 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 17 14:36:13.433: INFO: stderr: "I0517 14:36:13.322904 3062 log.go:172] (0xc0006d6630) (0xc000616a00) Create stream\nI0517 14:36:13.323047 3062 log.go:172] (0xc0006d6630) (0xc000616a00) Stream added, broadcasting: 1\nI0517 14:36:13.327263 3062 log.go:172] (0xc0006d6630) Reply frame received for 1\nI0517 14:36:13.327306 3062 log.go:172] (0xc0006d6630) (0xc00035e140) Create stream\nI0517 14:36:13.327317 3062 log.go:172] (0xc0006d6630) (0xc00035e140) Stream added, broadcasting: 3\nI0517 14:36:13.328475 3062 log.go:172] (0xc0006d6630) Reply frame received for 3\nI0517 14:36:13.328499 3062 log.go:172] (0xc0006d6630) (0xc000616aa0) Create stream\nI0517 14:36:13.328525 3062 log.go:172] (0xc0006d6630) (0xc000616aa0) Stream added, broadcasting: 5\nI0517 14:36:13.329776 3062 log.go:172] (0xc0006d6630) Reply frame received for 5\nI0517 14:36:13.391641 3062 log.go:172] (0xc0006d6630) Data frame received for 5\nI0517 14:36:13.391668 3062 log.go:172] (0xc000616aa0) (5) Data frame handling\nI0517 14:36:13.391685 3062 log.go:172] (0xc000616aa0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 14:36:13.425486 3062 log.go:172] (0xc0006d6630) Data frame received for 5\nI0517 14:36:13.425539 3062 log.go:172] (0xc000616aa0) (5) Data frame handling\nI0517 14:36:13.425570 3062 log.go:172] (0xc0006d6630) Data frame received for 3\nI0517 14:36:13.425584 3062 log.go:172] (0xc00035e140) (3) Data frame handling\nI0517 14:36:13.425601 3062 log.go:172] (0xc00035e140) (3) Data frame sent\nI0517 14:36:13.425626 3062 log.go:172] (0xc0006d6630) Data frame received for 3\nI0517 14:36:13.425639 3062 log.go:172] (0xc00035e140) (3) Data frame handling\nI0517 14:36:13.427393 3062 log.go:172] (0xc0006d6630) Data frame received for 1\nI0517 14:36:13.427430 3062 log.go:172] (0xc000616a00) (1) Data frame handling\nI0517 14:36:13.427450 3062 log.go:172] (0xc000616a00) (1) Data frame sent\nI0517 14:36:13.427507 3062 log.go:172] (0xc0006d6630) (0xc000616a00) Stream removed, broadcasting: 1\nI0517 14:36:13.427919 3062 log.go:172] (0xc0006d6630) (0xc000616a00) Stream removed, broadcasting: 1\nI0517 14:36:13.427938 3062 log.go:172] (0xc0006d6630) (0xc00035e140) Stream removed, broadcasting: 3\nI0517 14:36:13.428105 3062 log.go:172] (0xc0006d6630) (0xc000616aa0) Stream removed, broadcasting: 5\n"
May 17 14:36:13.434: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 17 14:36:13.434: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 17 14:36:13.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4724 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
May 17 14:36:13.657: INFO: stderr: "I0517 14:36:13.563279 3082 log.go:172] (0xc0009b4630) (0xc000654aa0) Create stream\nI0517 14:36:13.563330 3082 log.go:172] (0xc0009b4630) (0xc000654aa0) Stream added, broadcasting: 1\nI0517 14:36:13.565348 3082 log.go:172] (0xc0009b4630) Reply frame received for 1\nI0517 14:36:13.565394 3082 log.go:172] (0xc0009b4630) (0xc00095c000) Create stream\nI0517 14:36:13.565407 3082 log.go:172] (0xc0009b4630) (0xc00095c000) Stream added, broadcasting: 3\nI0517 14:36:13.566171 3082 log.go:172] (0xc0009b4630) Reply frame received for 3\nI0517 14:36:13.566211 3082 log.go:172] (0xc0009b4630) (0xc000654b40) Create stream\nI0517 14:36:13.566227 3082 log.go:172] (0xc0009b4630) (0xc000654b40) Stream added, broadcasting: 5\nI0517 14:36:13.567090 3082 log.go:172] (0xc0009b4630) Reply frame received for 5\nI0517 14:36:13.620711 3082 log.go:172] (0xc0009b4630) Data frame received for 5\nI0517 14:36:13.620745 3082 log.go:172] (0xc000654b40) (5) Data frame handling\nI0517 14:36:13.620786 3082 log.go:172] (0xc000654b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0517 14:36:13.649982 3082 log.go:172] (0xc0009b4630) Data frame received for 5\nI0517 14:36:13.650000 3082 log.go:172] (0xc000654b40) (5) Data frame handling\nI0517 14:36:13.650016 3082 log.go:172] (0xc0009b4630) Data frame received for 3\nI0517 14:36:13.650021 3082 log.go:172] (0xc00095c000) (3) Data frame handling\nI0517 14:36:13.650027 3082 log.go:172] (0xc00095c000) (3) Data frame sent\nI0517 14:36:13.650032 3082 log.go:172] (0xc0009b4630) Data frame received for 3\nI0517 14:36:13.650036 3082 log.go:172] (0xc00095c000) (3) Data frame handling\nI0517 14:36:13.651874 3082 log.go:172] (0xc0009b4630) Data frame received for 1\nI0517 14:36:13.651904 3082 log.go:172] (0xc000654aa0) (1) Data frame handling\nI0517 14:36:13.651918 3082 log.go:172] (0xc000654aa0) (1) Data frame sent\nI0517 14:36:13.651932 3082 log.go:172] (0xc0009b4630) (0xc000654aa0) Stream removed, broadcasting: 1\nI0517 14:36:13.652272 3082 log.go:172] (0xc0009b4630) (0xc000654aa0) Stream removed, broadcasting: 1\nI0517 14:36:13.652298 3082 log.go:172] (0xc0009b4630) (0xc00095c000) Stream removed, broadcasting: 3\nI0517 14:36:13.652310 3082 log.go:172] (0xc0009b4630) (0xc000654b40) Stream removed, broadcasting: 5\n"
May 17 14:36:13.658: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
May 17 14:36:13.658: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'
May 17 14:36:13.658: INFO: Waiting for statefulset status.replicas updated to 0
May 17 14:36:13.661: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 17 14:36:23.670: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 17 14:36:23.670: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 17 14:36:23.670: INFO: 
Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 17 14:36:23.681: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:23.681: INFO: ss-0 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:23.681: INFO: ss-1 iruya-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:23.681: INFO: ss-2 iruya-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:23.682: INFO: May 17 14:36:23.682: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 14:36:24.720: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:24.720: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC 
ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:24.720: INFO: ss-1 iruya-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:24.720: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:24.720: INFO: May 17 14:36:24.720: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 14:36:25.726: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:25.726: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:25.726: INFO: ss-1 iruya-worker2 Running 30s 
[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:25.726: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:25.726: INFO: May 17 14:36:25.726: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 14:36:26.731: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:26.731: INFO: ss-0 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:26.731: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:26.731: INFO: ss-2 iruya-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:26.731: INFO: May 17 14:36:26.731: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 14:36:27.737: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:27.738: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:27.738: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:27.738: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: 
[nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:27.738: INFO: May 17 14:36:27.738: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 14:36:28.747: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:28.747: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:28.747: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:28.747: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:28.747: INFO: May 17 14:36:28.747: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 
14:36:29.753: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:29.753: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:29.753: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:29.753: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:29.753: INFO: May 17 14:36:29.753: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 14:36:30.757: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:30.757: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:30.757: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:30.757: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:30.757: INFO: May 17 14:36:30.757: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 14:36:31.761: INFO: POD NODE PHASE GRACE CONDITIONS May 17 14:36:31.761: INFO: ss-0 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:31 +0000 UTC }] May 17 14:36:31.761: INFO: ss-1 iruya-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready 
False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:31.761: INFO: ss-2 iruya-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:36:13 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-17 14:35:52 +0000 UTC }] May 17 14:36:31.761: INFO: May 17 14:36:31.761: INFO: StatefulSet ss has not reached scale 0, at 3 May 17 14:36:32.765: INFO: Verifying statefulset ss doesn't scale past 0 for another 915.658819ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4724 May 17 14:36:33.770: INFO: Scaling statefulset ss to 0 May 17 14:36:33.779: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86 May 17 14:36:33.782: INFO: Deleting all statefulset in ns statefulset-4724 May 17 14:36:33.784: INFO: Scaling statefulset ss to 0 May 17 14:36:33.792: INFO: Waiting for statefulset status.replicas updated to 0 May 17 14:36:33.795: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:36:33.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4724" for this suite. 
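Editor's note: the `mv -v /usr/share/nginx/html/index.html /tmp/ || true` execs earlier in this test are how the suite deliberately breaks each pod's nginx readiness probe (once index.html is gone the probe fails and the pod drops to Ready=false), and the `|| true` keeps the step idempotent if the file was already moved. A minimal sketch of how such an exec invocation could be assembled; the `buildExecArgs` helper and its argument layout are illustrative assumptions, not the e2e framework's actual `RunKubectl` API:

```go
package main

import (
	"fmt"
	"strings"
)

// buildExecArgs assembles a kubectl argument list that runs a shell command
// inside a pod, mirroring the invocation seen in the log:
//   kubectl --kubeconfig=... exec --namespace=<ns> <pod> -- /bin/sh -x -c <cmd>
// Hypothetical helper for illustration only.
func buildExecArgs(kubeconfig, namespace, pod, command string) []string {
	return []string{
		"--kubeconfig=" + kubeconfig,
		"exec",
		"--namespace=" + namespace,
		pod,
		"--",
		"/bin/sh", "-x", "-c", command,
	}
}

func main() {
	// "|| true" makes the exec succeed even when index.html has already
	// been moved, so repeated invocations don't fail the test step.
	cmd := "mv -v /usr/share/nginx/html/index.html /tmp/ || true"
	args := buildExecArgs("/root/.kube/config", "statefulset-4724", "ss-1", cmd)
	fmt.Println("kubectl " + strings.Join(args, " "))
}
```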
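Editor's note: each "StatefulSet ss has not reached scale 0, at 3" line above is one iteration of a poll loop: the framework re-lists the pods roughly once a second, logs their conditions, and keeps waiting until the replica count reaches the target or a deadline expires. A generic sketch of that wait-until-condition pattern; the `waitFor` helper and its timings are assumptions, not the framework's actual wait utilities:

```go
package main

import (
	"fmt"
	"time"
)

// waitFor polls cond every interval until it returns true or timeout
// elapses, mimicking the "Waiting up to ..." loops in the log above.
func waitFor(timeout, interval time.Duration, cond func() bool) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if cond() {
			return true
		}
		time.Sleep(interval)
	}
	return cond() // one final check at the deadline
}

func main() {
	replicas := 3
	ok := waitFor(2*time.Second, 100*time.Millisecond, func() bool {
		if replicas > 0 {
			// Stand-in for re-listing pods and observing one terminate.
			fmt.Printf("StatefulSet ss has not reached scale 0, at %d\n", replicas)
			replicas--
			return false
		}
		return true
	})
	fmt.Println("scaled to zero:", ok)
}
```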
May 17 14:36:39.867: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:36:39.945: INFO: namespace statefulset-4724 deletion completed in 6.132441348s • [SLOW TEST:68.320 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:36:39.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:36:48.136: INFO: Waiting up to 3m0s for all 
(but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-551" for this suite. May 17 14:36:54.161: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:36:54.228: INFO: namespace kubelet-test-551 deletion completed in 6.088180962s • [SLOW TEST:14.283 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78 should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client May 17 14:36:54.228: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name secret-test-map-f36fba30-7587-4610-a383-bba20757859d STEP: Creating a pod to test consume secrets May 17 14:36:54.313: INFO: Waiting up to 5m0s for pod "pod-secrets-bb9911ed-5c28-4b56-ae2c-0262d71378d1" in namespace "secrets-5150" to be "success or failure" May 17 14:36:54.325: INFO: Pod "pod-secrets-bb9911ed-5c28-4b56-ae2c-0262d71378d1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 11.414412ms May 17 14:36:56.328: INFO: Pod "pod-secrets-bb9911ed-5c28-4b56-ae2c-0262d71378d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015110557s May 17 14:36:58.333: INFO: Pod "pod-secrets-bb9911ed-5c28-4b56-ae2c-0262d71378d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019691886s STEP: Saw pod success May 17 14:36:58.333: INFO: Pod "pod-secrets-bb9911ed-5c28-4b56-ae2c-0262d71378d1" satisfied condition "success or failure" May 17 14:36:58.336: INFO: Trying to get logs from node iruya-worker pod pod-secrets-bb9911ed-5c28-4b56-ae2c-0262d71378d1 container secret-volume-test: STEP: delete the pod May 17 14:36:58.354: INFO: Waiting for pod pod-secrets-bb9911ed-5c28-4b56-ae2c-0262d71378d1 to disappear May 17 14:36:58.432: INFO: Pod pod-secrets-bb9911ed-5c28-4b56-ae2c-0262d71378d1 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 May 17 14:36:58.432: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5150" for this suite. May 17 14:37:04.452: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered May 17 14:37:04.553: INFO: namespace secrets-5150 deletion completed in 6.116659062s • [SLOW TEST:10.324 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SMay 17 14:37:04.553: INFO: Running AfterSuite actions on all nodes May 17 14:37:04.553: INFO: Running AfterSuite actions on node 1 May 17 14:37:04.553: INFO: Skipping dumping logs from cluster Ran 215 of 4412 Specs in 6077.367 seconds SUCCESS! -- 215 Passed | 0 Failed | 0 Pending | 4197 Skipped PASS
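Editor's note: the closing summary is internally consistent: 215 specs ran (215 passed + 0 failed), and 215 + 0 pending + 4197 skipped = 4412 total. A small sketch of that accounting invariant, assuming (as Ginkgo's summary implies) that pending and skipped specs are not counted as run; `specAccountingOK` is a hypothetical helper:

```go
package main

import "fmt"

// specAccountingOK checks the invariant behind the Ginkgo summary line:
// passed + failed equals the number of specs run, and run + pending +
// skipped equals the total spec count.
func specAccountingOK(total, ran, passed, failed, pending, skipped int) bool {
	return passed+failed == ran && ran+pending+skipped == total
}

func main() {
	// Figures from the summary above: ran 215 of 4412 specs;
	// 215 passed, 0 failed, 0 pending, 4197 skipped.
	fmt.Println(specAccountingOK(4412, 215, 215, 0, 0, 4197))
}
```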